"
]
},
"execution_count": 151,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import IPython.display as ipd\n",
"import numpy as np\n",
"import random\n",
"\n",
"rand_int = random.randint(0, len(augmented_samples_to_add)-1)\n",
"print(rand_int)\n",
"\n",
"print(augmented_samples_to_add[rand_int][\"labels\"])\n",
"ipd.Audio(data=augmented_samples_to_add[rand_int][\"input_values\"], autoplay=True, rate=16000)"
]
},
{
"cell_type": "code",
"execution_count": 152,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dataset({\n",
" features: ['input_values', 'input_length', 'labels'],\n",
" num_rows: 3000\n",
"})"
]
},
"execution_count": 152,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"augmented_samples_to_add"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"common_voice_train_audio_augmented = concatenate_datasets([common_voice_train_audio, augmented_samples_to_add])"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dataset({\n",
" features: ['input_values', 'input_length', 'labels'],\n",
" num_rows: 22453\n",
"})"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"common_voice_train_audio_augmented"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"id": "tdHfbUJ_09iA"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"train before filtering:\n",
"Dataset({\n",
" features: ['input_values', 'input_length', 'labels'],\n",
" num_rows: 22453\n",
"})\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c088e77593c345f496faf74b7a374823",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/23 [00:00, ?ba/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "1be91e8c854944c79a58a4380a8875db",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/23 [00:00, ?ba/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"train after filtering:\n",
"Dataset({\n",
" features: ['input_values', 'input_length', 'labels'],\n",
" num_rows: 22174\n",
"})\n"
]
}
],
"source": [
"max_input_length_in_sec = 30\n",
"min_input_length_in_sec = 2\n",
"print('train before filtering:')\n",
"print(common_voice_train_audio_augmented)\n",
"common_voice_train_audio_augmented = common_voice_train_audio_augmented.filter(lambda x: x < max_input_length_in_sec * processor.feature_extractor.sampling_rate, input_columns=[\"input_length\"])\n",
"common_voice_train_audio_augmented = common_voice_train_audio_augmented.filter(lambda x: x > min_input_length_in_sec * processor.feature_extractor.sampling_rate, input_columns=[\"input_length\"])\n",
"print('train after filtering:')\n",
"print(common_voice_train_audio_augmented)\n",
"# print('\\n')\n",
"# print('test before filtering:')\n",
"# print(common_voice_test_audio)\n",
"# common_voice_test_audio = common_voice_test_audio.filter(lambda x: x < max_input_length_in_sec * processor.feature_extractor.sampling_rate, input_columns=[\"input_length\"])\n",
"# common_voice_test_audio = common_voice_test_audio.filter(lambda x: x > min_input_length_in_sec * processor.feature_extractor.sampling_rate, input_columns=[\"input_length\"])\n",
"# print('test after filtering:')\n",
"# print(common_voice_test)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1ZWDCCKqwcfS"
},
"source": [
"Awesome, now we are ready to start training!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gYlQkKVoRUos"
},
"source": [
"## Training\n",
"\n",
"The data is processed so that we are ready to start setting up the training pipeline. We will make use of 🤗's [Trainer](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer) for which we essentially need to do the following:\n",
"\n",
"- Define a data collator. In contrast to most NLP models, XLS-R has a much larger input length than output length. *E.g.*, a sample of input length 50000 has an output length of no more than 100. Given the large input sizes, it is much more efficient to pad the training batches dynamically meaning that all training samples should only be padded to the longest sample in their batch and not the overall longest sample. Therefore, fine-tuning XLS-R requires a special padding data collator, which we will define below\n",
"\n",
"- Evaluation metric. During training, the model should be evaluated on the word error rate. We should define a `compute_metrics` function accordingly\n",
"\n",
"- Load a pretrained checkpoint. We need to load a pretrained checkpoint and configure it correctly for training.\n",
"\n",
"- Define the training configuration.\n",
"\n",
"After having fine-tuned the model, we will correctly evaluate it on the test data and verify that it has indeed learned to correctly transcribe speech."
]
},
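{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the memory argument concrete, here is a minimal sketch (with made-up sample lengths, not taken from our dataset) comparing dynamic, batch-wise padding against padding every sample to the globally longest one:\n",
"\n",
"```python\n",
"# Hypothetical raw-audio sample lengths (in frames), grouped into two batches.\n",
"batches = [[48000, 52000], [210000, 198000]]\n",
"\n",
"# Dynamic padding: each batch is padded only to its own longest sample.\n",
"dynamic = sum(len(batch) * max(batch) for batch in batches)\n",
"\n",
"# Static padding: every sample is padded to the overall longest sample.\n",
"global_max = max(max(batch) for batch in batches)\n",
"static = sum(len(batch) for batch in batches) * global_max\n",
"\n",
"print(f\"dynamic: {dynamic} values, static: {static} values\")\n",
"# dynamic: 524000 values, static: 840000 values, i.e. ~38% saved\n",
"```"
]
},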
{
"cell_type": "markdown",
"metadata": {
"id": "Slk403unUS91"
},
"source": [
"### Set-up Trainer\n",
"\n",
"Let's start by defining the data collator. The code for the data collator was copied from [this example](https://github.com/huggingface/transformers/blob/7e61d56a45c19284cfda0cee8995fb552f6b1f4e/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L219).\n",
"\n",
"Without going into too many details, in contrast to the common data collators, this data collator treats the `input_values` and `labels` differently and thus applies to separate padding functions on them (again making use of XLS-R processor's context manager). This is necessary because in speech input and output are of different modalities meaning that they should not be treated by the same padding function.\n",
"Analogous to the common data collators, the padding tokens in the labels with `-100` so that those tokens are **not** taken into account when computing the loss."
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"id": "tborvC9hx88e"
},
"outputs": [],
"source": [
"import torch\n",
"\n",
"from dataclasses import dataclass, field\n",
"from typing import Any, Dict, List, Optional, Union\n",
"\n",
"@dataclass\n",
"class DataCollatorCTCWithPadding:\n",
" \"\"\"\n",
" Data collator that will dynamically pad the inputs received.\n",
" Args:\n",
" processor (:class:`~transformers.Wav2Vec2Processor`)\n",
" The processor used for proccessing the data.\n",
" padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):\n",
" Select a strategy to pad the returned sequences (according to the model's padding side and padding index)\n",
" among:\n",
" * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single\n",
" sequence if provided).\n",
" * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the\n",
" maximum acceptable input length for the model if that argument is not provided.\n",
" * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of\n",
" different lengths).\n",
" \"\"\"\n",
"\n",
" processor: Wav2Vec2Processor\n",
" padding: Union[bool, str] = True\n",
"\n",
" def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:\n",
" # split inputs and labels since they have to be of different lenghts and need\n",
" # different padding methods\n",
" input_features = [{\"input_values\": feature[\"input_values\"]} for feature in features]\n",
" label_features = [{\"input_ids\": feature[\"labels\"]} for feature in features]\n",
"\n",
" batch = self.processor.pad(\n",
" input_features,\n",
" padding=self.padding,\n",
" return_tensors=\"pt\",\n",
" )\n",
" with self.processor.as_target_processor():\n",
" labels_batch = self.processor.pad(\n",
" label_features,\n",
" padding=self.padding,\n",
" return_tensors=\"pt\",\n",
" )\n",
"\n",
" # replace padding with -100 to ignore loss correctly\n",
" labels = labels_batch[\"input_ids\"].masked_fill(labels_batch.attention_mask.ne(1), -100)\n",
"\n",
" batch[\"labels\"] = labels\n",
"\n",
" return batch"
]
},
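{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the `-100` replacement above, here is a toy sketch (hand-made tensors, not real data) of what `masked_fill` does to the padded label positions:\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"# Two label sequences of lengths 3 and 5, padded with pad token id 0.\n",
"input_ids = torch.tensor([[7, 8, 9, 0, 0], [3, 4, 5, 6, 7]])\n",
"attention_mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])\n",
"\n",
"# Positions where the attention mask is 0 are set to -100 and hence\n",
"# ignored by the CTC loss.\n",
"labels = input_ids.masked_fill(attention_mask.ne(1), -100)\n",
"print(labels)\n",
"# tensor([[   7,    8,    9, -100, -100],\n",
"#         [   3,    4,    5,    6,    7]])\n",
"```"
]
},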
{
"cell_type": "code",
"execution_count": 44,
"metadata": {
"id": "lbQf5GuZyQ4_"
},
"outputs": [],
"source": [
"data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xO-Zdj-5cxXp"
},
"source": [
"Next, the evaluation metric is defined. As mentioned earlier, the \n",
"predominant metric in ASR is the word error rate (WER), hence we will use it in this notebook as well."
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"id": "9Xsux2gmyXso",
"outputId": "4ae12795-d6ac-4b51-ff84-748c8a3c8bc9"
},
"outputs": [],
"source": [
"wer_metric = load_metric(\"wer\")"
]
},
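{
"cell_type": "markdown",
"metadata": {},
"source": [
"For intuition, a small sketch of what the metric returns on toy strings (not actual model output). WER is the number of word-level substitutions, insertions, and deletions divided by the number of reference words:\n",
"\n",
"```python\n",
"predictions = [\"hello world\", \"good morning\"]\n",
"references = [\"hello there world\", \"good morning\"]\n",
"\n",
"# One deletion (\"there\") out of 5 reference words -> WER = 0.2\n",
"print(wer_metric.compute(predictions=predictions, references=references))\n",
"```"
]
},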
{
"cell_type": "markdown",
"metadata": {
"id": "E1qZU5p-deqB"
},
"source": [
"The model will return a sequence of logit vectors:\n",
"$\\mathbf{y}_1, \\ldots, \\mathbf{y}_m$ with $\\mathbf{y}_1 = f_{\\theta}(x_1, \\ldots, x_n)[0]$ and $n >> m$.\n",
"\n",
"A logit vector $\\mathbf{y}_1$ contains the log-odds for each word in the vocabulary we defined earlier, thus $\\text{len}(\\mathbf{y}_i) =$ `config.vocab_size`. We are interested in the most likely prediction of the model and thus take the `argmax(...)` of the logits. Also, we transform the encoded labels back to the original string by replacing `-100` with the `pad_token_id` and decoding the ids while making sure that consecutive tokens are **not** grouped to the same token in CTC style ${}^1$."
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {
"id": "1XZ-kjweyTy_"
},
"outputs": [],
"source": [
"def compute_metrics(pred):\n",
" pred_logits = pred.predictions\n",
" pred_ids = np.argmax(pred_logits, axis=-1)\n",
"\n",
" pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id\n",
"\n",
" pred_str = processor.batch_decode(pred_ids)\n",
" # we do not want to group tokens when computing the metrics\n",
" label_str = processor.batch_decode(pred.label_ids, group_tokens=False)\n",
"\n",
" wer = wer_metric.compute(predictions=pred_str, references=label_str)\n",
"\n",
" return {\"wer\": wer}"
]
},
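{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see why the references are decoded with `group_tokens=False`, here is a toy sketch using a hypothetical minimal vocabulary (the real vocabulary was built earlier in the notebook):\n",
"\n",
"```python\n",
"import json, os, tempfile\n",
"from transformers import Wav2Vec2CTCTokenizer\n",
"\n",
"# Hypothetical toy vocab, purely for illustration.\n",
"vocab = {\"<pad>\": 0, \"<unk>\": 1, \"|\": 2, \"h\": 3, \"e\": 4, \"l\": 5, \"o\": 6}\n",
"with tempfile.NamedTemporaryFile(\"w\", suffix=\".json\", delete=False) as f:\n",
"    json.dump(vocab, f)\n",
"    vocab_path = f.name\n",
"\n",
"toy_tokenizer = Wav2Vec2CTCTokenizer(vocab_path, pad_token=\"<pad>\", word_delimiter_token=\"|\")\n",
"label_ids = [3, 4, 5, 5, 6]  # the label \"hello\"\n",
"\n",
"print(toy_tokenizer.decode(label_ids))                      # \"helo\": repeated \"l\" collapsed, CTC-style\n",
"print(toy_tokenizer.decode(label_ids, group_tokens=False))  # \"hello\": what we want for references\n",
"os.remove(vocab_path)\n",
"```"
]
},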
{
"cell_type": "markdown",
"metadata": {
"id": "Xmgrx4bRwLIH"
},
"source": [
"Now, we can load the pretrained checkpoint of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m). The tokenizer's `pad_token_id` must be to define the model's `pad_token_id` or in the case of `Wav2Vec2ForCTC` also CTC's *blank token* ${}^2$. To save GPU memory, we enable PyTorch's [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html) and also set the loss reduction to \"*mean*\".\n",
"\n",
"Because the dataset is quite small (~6h of training data) and because Common Voice is quite noisy, fine-tuning Facebook's [wav2vec2-xls-r-300m checkpoint](https://huggingface.co/facebook/wav2vec2-xls-r-300m) seems to require some hyper-parameter tuning. Therefore, I had to play around a bit with different values for dropout, [SpecAugment](https://arxiv.org/abs/1904.08779)'s masking dropout rate, layer dropout, and the learning rate until training seemed to be stable enough. \n",
"\n",
"**Note**: When using this notebook to train XLS-R on another language of Common Voice those hyper-parameter settings might not work very well. Feel free to adapt those depending on your use case. "
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [
{
"data": {
"text/plain": [
"Wav2Vec2ForCTC(\n",
" (wav2vec2): Wav2Vec2Model(\n",
" (feature_extractor): Wav2Vec2FeatureEncoder(\n",
" (conv_layers): ModuleList(\n",
" (0): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(1, 512, kernel_size=(10,), stride=(5,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (1): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (2): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (3): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (4): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (5): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (6): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" )\n",
" )\n",
" (feature_projection): Wav2Vec2FeatureProjection(\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" (projection): Linear(in_features=512, out_features=1280, bias=True)\n",
" (dropout): Dropout(p=0.04, inplace=False)\n",
" )\n",
" (encoder): Wav2Vec2EncoderStableLayerNorm(\n",
" (pos_conv_embed): Wav2Vec2PositionalConvEmbedding(\n",
" (conv): Conv1d(1280, 1280, kernel_size=(128,), stride=(1,), padding=(64,), groups=16)\n",
" (padding): Wav2Vec2SamePadLayer()\n",
" )\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layers): ModuleList(\n",
" (0): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (1): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (2): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (3): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (4): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (5): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (6): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (7): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (8): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (9): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (10): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (11): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (12): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (13): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (14): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (15): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (16): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (17): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (18): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (19): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (20): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (21): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (22): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (23): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (24): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (25): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (26): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (27): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (28): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (29): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (30): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (31): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (32): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (33): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (34): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (35): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (36): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (37): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (38): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (39): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (40): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (41): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (42): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (43): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (44): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (45): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (46): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (47): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" )\n",
" )\n",
" )\n",
" (dropout): Dropout(p=0.0, inplace=False)\n",
" (lm_head): Linear(in_features=1280, out_features=36, bias=True)\n",
")"
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from transformers import Wav2Vec2ForCTC\n",
"\n",
"model = Wav2Vec2ForCTC.from_pretrained(\n",
" \"wav2vec2-xlsr-fi-train-aug-lm-1B\", \n",
" attention_dropout=0.094,\n",
" hidden_dropout=0.047,\n",
" feat_proj_dropout=0.04,\n",
" mask_time_prob=0.082,\n",
" layerdrop=0.041,\n",
" activation_dropout=0.055,\n",
" ctc_loss_reduction=\"mean\", \n",
" pad_token_id=processor.tokenizer.pad_token_id,\n",
" vocab_size=len(processor.tokenizer),\n",
")\n",
"model.to('cuda')"
]
},
{
"cell_type": "code",
"execution_count": 163,
"metadata": {
"collapsed": true,
"id": "e7cqAWIayn6w",
"jupyter": {
"outputs_hidden": true
},
"outputId": "4f01d0c0-de3f-44b2-df05-b7c734f15448"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Some weights of the model checkpoint at facebook/wav2vec2-xls-r-1b were not used when initializing Wav2Vec2ForCTC: ['project_q.weight', 'project_hid.weight', 'project_hid.bias', 'project_q.bias', 'quantizer.weight_proj.bias', 'quantizer.codevectors', 'quantizer.weight_proj.weight']\n",
"- This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
"- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
"Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-xls-r-1b and are newly initialized: ['lm_head.weight', 'lm_head.bias']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
},
{
"data": {
"text/plain": [
"Wav2Vec2ForCTC(\n",
" (wav2vec2): Wav2Vec2Model(\n",
" (feature_extractor): Wav2Vec2FeatureEncoder(\n",
" (conv_layers): ModuleList(\n",
" (0): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(1, 512, kernel_size=(10,), stride=(5,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (1): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (2): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (3): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (4): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (5): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (6): Wav2Vec2LayerNormConvLayer(\n",
" (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" )\n",
" )\n",
" (feature_projection): Wav2Vec2FeatureProjection(\n",
" (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)\n",
" (projection): Linear(in_features=512, out_features=1280, bias=True)\n",
" (dropout): Dropout(p=0.04, inplace=False)\n",
" )\n",
" (encoder): Wav2Vec2EncoderStableLayerNorm(\n",
" (pos_conv_embed): Wav2Vec2PositionalConvEmbedding(\n",
" (conv): Conv1d(1280, 1280, kernel_size=(128,), stride=(1,), padding=(64,), groups=16)\n",
" (padding): Wav2Vec2SamePadLayer()\n",
" )\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layers): ModuleList(\n",
" (0): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (1): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (2): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (3): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (4): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (5): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (6): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (7): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (8): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (9): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (10): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (11): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (12): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (13): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (14): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (15): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (16): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (17): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (18): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (19): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (20): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (21): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (22): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (23): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (24): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (25): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (26): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (27): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (28): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (29): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (30): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (31): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (32): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (33): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (34): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (35): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (36): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (37): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (38): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (39): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (40): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (41): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (42): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (43): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (44): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (45): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (46): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (47): Wav2Vec2EncoderLayerStableLayerNorm(\n",
" (attention): Wav2Vec2Attention(\n",
" (k_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (v_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (q_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" (out_proj): Linear(in_features=1280, out_features=1280, bias=True)\n",
" )\n",
" (dropout): Dropout(p=0.047, inplace=False)\n",
" (layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" (feed_forward): Wav2Vec2FeedForward(\n",
" (intermediate_dropout): Dropout(p=0.055, inplace=False)\n",
" (intermediate_dense): Linear(in_features=1280, out_features=5120, bias=True)\n",
" (output_dense): Linear(in_features=5120, out_features=1280, bias=True)\n",
" (output_dropout): Dropout(p=0.047, inplace=False)\n",
" )\n",
" (final_layer_norm): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" )\n",
" )\n",
" )\n",
" (dropout): Dropout(p=0.0, inplace=False)\n",
" (lm_head): Linear(in_features=1280, out_features=36, bias=True)\n",
")"
]
},
"execution_count": 163,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from transformers import Wav2Vec2ForCTC\n",
"\n",
"model = Wav2Vec2ForCTC.from_pretrained(\n",
" \"facebook/wav2vec2-xls-r-1b\", \n",
" attention_dropout=0.094,\n",
" hidden_dropout=0.047,\n",
" feat_proj_dropout=0.04,\n",
" mask_time_prob=0.082,\n",
" layerdrop=0.041,\n",
" activation_dropout=0.055,\n",
" ctc_loss_reduction=\"mean\", \n",
" pad_token_id=processor.tokenizer.pad_token_id,\n",
" vocab_size=len(processor.tokenizer),\n",
")\n",
"model.to('cuda')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1DwR3XLSzGDD"
},
"source": [
"The first component of XLS-R consists of a stack of CNN layers that are used to extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and as stated in the [paper](https://arxiv.org/pdf/2006.13979.pdf) does not need to be fine-tuned anymore. \n",
"Thus, we can set the `requires_grad` to `False` for all parameters of the *feature extraction* part."
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"id": "oGI8zObtZ3V0"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:1680: FutureWarning: The method `freeze_feature_extractor` is deprecated and will be removed in Transformers v5.Please use the equivalent `freeze_feature_encoder` method instead.\n",
" warnings.warn(\n"
]
}
],
"source": [
"model.freeze_feature_extractor()"
]
},
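{
"cell_type": "markdown",
"metadata": {},
"source": [
"As the warning above notes, `freeze_feature_extractor` is deprecated in favor of the equivalent `freeze_feature_encoder`. For illustration, here is a minimal sketch of what the call does under the hood: it sets `requires_grad=False` on the CNN feature-encoder parameters, which we can verify by counting trainable parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Manual equivalent of model.freeze_feature_extractor(): exclude the\n",
"# CNN feature encoder from gradient updates during fine-tuning.\n",
"for param in model.wav2vec2.feature_extractor.parameters():\n",
"    param.requires_grad = False\n",
"\n",
"trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)\n",
"total = sum(p.numel() for p in model.parameters())\n",
"print(f'trainable parameters: {trainable:,} / {total:,}')"
]
},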
{
"cell_type": "markdown",
"metadata": {
"id": "lD4aGhQM0K-D"
},
"source": [
"In a final step, we define all parameters related to training. \n",
"To give more explanation on some of the parameters:\n",
"- `group_by_length` makes training more efficient by grouping training samples of similar input length into one batch. This can significantly speed up training time by heavily reducing the overall number of useless padding tokens that are passed through the model\n",
"- `learning_rate` and `weight_decay` were heuristically tuned until fine-tuning has become stable. Note that those parameters strongly depend on the Common Voice dataset and might be suboptimal for other speech datasets.\n",
"\n",
"For more explanations on other parameters, one can take a look at the [docs](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer#trainingarguments).\n",
"\n",
"During training, a checkpoint will be uploaded asynchronously to the hub every 400 training steps. It allows you to also play around with the demo widget even while your model is still training.\n",
"\n",
"**Note**: If one does not want to upload the model checkpoints to the hub, simply set `push_to_hub=False`."
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"WANDB_DISABLED\"] = \"true\""
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {
"id": "KbeKSV7uzGPP"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using the `WAND_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).\n"
]
}
],
"source": [
"from transformers import TrainingArguments\n",
"\n",
"training_args = TrainingArguments(\n",
" output_dir=repo_name,\n",
" group_by_length=True,\n",
" per_device_train_batch_size=8,\n",
" gradient_accumulation_steps=2,\n",
" evaluation_strategy=\"steps\",\n",
" num_train_epochs=4,\n",
" gradient_checkpointing=True,\n",
" fp16=True,\n",
" save_steps=400,\n",
" eval_steps=400,\n",
" logging_steps=50,\n",
" learning_rate=1e-4,\n",
" warmup_steps=100,\n",
" save_total_limit=3,\n",
" push_to_hub=True,\n",
" load_best_model_at_end=True,\n",
" greater_is_better=False,\n",
" metric_for_best_model='eval_wer',\n",
")"
]
},
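{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a small illustrative computation, sketched here for clarity), the effective batch size is `per_device_train_batch_size * gradient_accumulation_steps = 8 * 2 = 16`. With the 22174 filtered training examples and 4 epochs, this yields the 5544 total optimization steps reported in the `Trainer` log further below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"num_examples = 22174  # rows in common_voice_train_audio_augmented after filtering\n",
"effective_batch_size = 8 * 2  # per_device_train_batch_size * gradient_accumulation_steps\n",
"steps_per_epoch = math.ceil(num_examples / effective_batch_size)  # 1386\n",
"print(steps_per_epoch * 4)  # 5544 optimization steps over 4 epochs"
]
},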
{
"cell_type": "markdown",
"metadata": {
"id": "OsW-WZcL1ZtN"
},
"source": [
"Now, all instances can be passed to Trainer and we are ready to start training!"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {
"id": "rY7vBmFCPFgC",
"outputId": "441e7019-a5a1-4c0a-afa2-cced90ffed05"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/workspace/xlsr_fi/wav2vec2-xlsr-fi-train-aug-lm-1B is already a clone of https://huggingface.co/RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B. Make sure you pull the latest changes with `repo.git_pull()`.\n",
"Using amp half precision backend\n"
]
}
],
"source": [
"from transformers import Trainer\n",
"\n",
"trainer = Trainer(\n",
" model=model,\n",
" data_collator=data_collator,\n",
" args=training_args,\n",
" compute_metrics=compute_metrics,\n",
" train_dataset=common_voice_train_audio_augmented,\n",
" eval_dataset=common_voice_test_audio,\n",
" tokenizer=processor.feature_extractor,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UoXBx1JAA0DX"
},
"source": [
"\n",
"\n",
"---\n",
"\n",
"${}^1$ To allow models to become independent of the speaker rate, in CTC, consecutive tokens that are identical are simply grouped as a single token. However, the encoded labels should not be grouped when decoding since they don't correspond to the predicted tokens of the model, which is why the `group_tokens=False` parameter has to be passed. If we wouldn't pass this parameter a word like `\"hello\"` would incorrectly be encoded, and decoded as `\"helo\"`.\n",
"\n",
"${}^2$ The blank token allows the model to predict a word, such as `\"hello\"` by forcing it to insert the blank token between the two l's. A CTC-conform prediction of `\"hello\"` of our model would be `[PAD] [PAD] \"h\" \"e\" \"e\" \"l\" \"l\" [PAD] \"l\" \"o\" \"o\" [PAD]`."
]
},
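{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make footnote ${}^1$ concrete, here is a small sketch using the notebook's `processor.tokenizer` (an illustrative example, assuming the characters of `\"hello\"` are in the vocabulary): decoding the encoded labels with `group_tokens=True` collapses the double `\"l\"`, while `group_tokens=False` keeps the labels intact."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Encode \"hello\" into label ids, then decode with and without CTC-style\n",
"# grouping of repeated tokens (see footnote 1 above).\n",
"ids = processor.tokenizer(\"hello\").input_ids\n",
"print(processor.tokenizer.decode(ids, group_tokens=True))   # 'helo' - double 'l' collapsed\n",
"print(processor.tokenizer.decode(ids, group_tokens=False))  # 'hello' - labels kept intact"
]
},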
{
"cell_type": "markdown",
"metadata": {
"id": "rpvZHM1xReIW"
},
"source": [
"### Training"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j-3oKSzZ1hGq"
},
"source": [
"Training will take multiple hours depending on the GPU allocated to this notebook. While the trained model yields somewhat satisfying results on *Common Voice*'s test data of Turkish, it is by no means an optimally fine-tuned model. The purpose of this notebook is just to demonstrate how to fine-tune XLS-R on an ASR dataset.\n",
"\n",
"In case you want to use this google colab to fine-tune your model, you should make sure that your training doesn't stop due to inactivity. A simple hack to prevent this is to paste the following code into the console of this tab (*right mouse click -> inspect -> Console tab and insert code*)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VYYAvgkW4P0m"
},
"source": [
"```javascript\n",
"function ConnectButton(){\n",
" console.log(\"Connect pushed\"); \n",
" document.querySelector(\"#top-toolbar > colab-connect-button\").shadowRoot.querySelector(\"#connect\").click() \n",
"}\n",
"setInterval(ConnectButton,60000);\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7bGgLV2r0yvZ"
},
"source": [
"Depending on what GPU was allocated to your google colab it might be possible that you are seeing an `\"out-of-memory\"` error here. In this case, it's probably best to reduce `per_device_train_batch_size` to 8 or even less and increase [`gradient_accumulation`](https://huggingface.co/transformers/master/main_classes/trainer.html#trainingarguments)."
]
},
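{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, halving the batch size while doubling the accumulation steps keeps the effective batch size (16) unchanged. A sketch of only the relevant `TrainingArguments`; keep all other arguments as defined above:\n",
"\n",
"```python\n",
"from transformers import TrainingArguments\n",
"\n",
"# Trade per-device batch size for gradient accumulation to avoid OOM;\n",
"# the effective batch size stays 4 * 4 = 16.\n",
"training_args = TrainingArguments(\n",
"    output_dir=\"wav2vec2-xlsr-fi-train-aug-lm-1B\",\n",
"    per_device_train_batch_size=4,  # was 8\n",
"    gradient_accumulation_steps=4,  # was 2\n",
")\n",
"```"
]
},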
{
"cell_type": "code",
"execution_count": 52,
"metadata": {
"id": "9fRr9TG5pGBl",
"outputId": "e07eeffc-bbd6-4f1d-a45e-b74bad290f79"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"/opt/conda/lib/python3.8/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use thePyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n",
" warnings.warn(\n",
"***** Running training *****\n",
" Num examples = 22174\n",
" Num Epochs = 4\n",
" Instantaneous batch size per device = 8\n",
" Total train batch size (w. parallel, distributed & accumulation) = 16\n",
" Gradient Accumulation steps = 2\n",
" Total optimization steps = 5544\n"
]
},
{
"data": {
"text/html": [
"\n",
" \n",
" \n",
"
\n",
" [5544/5544 4:21:28, Epoch 4/4]\n",
"
\n",
" \n",
" \n",
" \n",
" Step | \n",
" Training Loss | \n",
" Validation Loss | \n",
" Wer | \n",
"
\n",
" \n",
" \n",
" \n",
" 400 | \n",
" 0.647300 | \n",
" 0.285699 | \n",
" 0.382526 | \n",
"
\n",
" \n",
" 800 | \n",
" 0.603900 | \n",
" 0.245895 | \n",
" 0.347557 | \n",
"
\n",
" \n",
" 1200 | \n",
" 0.475700 | \n",
" 0.233797 | \n",
" 0.327387 | \n",
"
\n",
" \n",
" 1600 | \n",
" 0.447300 | \n",
" 0.224628 | \n",
" 0.312791 | \n",
"
\n",
" \n",
" 2000 | \n",
" 0.432200 | \n",
" 0.196223 | \n",
" 0.280458 | \n",
"
\n",
" \n",
" 2400 | \n",
" 0.396100 | \n",
" 0.206987 | \n",
" 0.279749 | \n",
"
\n",
" \n",
" 2800 | \n",
" 0.364200 | \n",
" 0.178981 | \n",
" 0.247314 | \n",
"
\n",
" \n",
" 3200 | \n",
" 0.356100 | \n",
" 0.176911 | \n",
" 0.237482 | \n",
"
\n",
" \n",
" 3600 | \n",
" 0.282000 | \n",
" 0.167229 | \n",
" 0.226333 | \n",
"
\n",
" \n",
" 4000 | \n",
" 0.297800 | \n",
" 0.163568 | \n",
" 0.219238 | \n",
"
\n",
" \n",
" 4400 | \n",
" 0.272200 | \n",
" 0.163688 | \n",
" 0.210217 | \n",
"
\n",
" \n",
" 4800 | \n",
" 0.292400 | \n",
" 0.150580 | \n",
" 0.202108 | \n",
"
\n",
" \n",
" 5200 | \n",
" 0.263100 | \n",
" 0.149869 | \n",
" 0.195520 | \n",
"
\n",
" \n",
"
"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-400\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-400/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-400/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-400/preprocessor_config.json\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/preprocessor_config.json\n",
"Several commits (2) will be pushed upstream.\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-7600] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-800\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-800/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-800/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-800/preprocessor_config.json\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-8000] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1200\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1200/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1200/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1200/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-8400] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1600\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1600/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1600/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1600/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-400] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2000\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2000/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2000/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2000/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-800] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2400\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2400/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2400/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2400/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1200] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2800\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2800/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2800/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2800/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-1600] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3200\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3200/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3200/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3200/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2000] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3600\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3600/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3600/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3600/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2400] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4000\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4000/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4000/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4000/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-2800] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4400\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4400/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4400/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4400/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3200] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4800\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4800/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4800/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4800/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-3600] due to args.save_total_limit\n",
"The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.\n",
"***** Running Evaluation *****\n",
" Num examples = 1595\n",
" Batch size = 8\n",
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-5200\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-5200/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-5200/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-5200/preprocessor_config.json\n",
"Deleting older checkpoint [wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-4000] due to args.save_total_limit\n",
"\n",
"\n",
"Training completed. Do not forget to share your model on huggingface.co/models =)\n",
"\n",
"\n",
"Loading best model from wav2vec2-xlsr-fi-train-aug-lm-1B/checkpoint-5200 (score: 0.19551996756537604).\n"
]
},
{
"data": {
"text/plain": [
"TrainOutput(global_step=5544, training_loss=0.3942209981048606, metrics={'train_runtime': 15698.6908, 'train_samples_per_second': 5.65, 'train_steps_per_second': 0.353, 'total_flos': 5.82002597856266e+19, 'train_loss': 0.3942209981048606, 'epoch': 4.0})"
]
},
"execution_count": 52,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import numpy as np\n",
"trainer.train()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "a9q4mgMZplr_"
},
"source": [
"The training loss and validation WER go down nicely."
]
},
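{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see this at a glance, the logged metrics can be plotted straight from `trainer.state.log_history`. A small sketch, assuming `matplotlib` is available in the environment:\n",
"\n",
"```python\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# Evaluation entries in log_history carry \"eval_wer\" alongside \"step\".\n",
"eval_logs = [h for h in trainer.state.log_history if \"eval_wer\" in h]\n",
"steps = [h[\"step\"] for h in eval_logs]\n",
"wers = [h[\"eval_wer\"] for h in eval_logs]\n",
"\n",
"plt.plot(steps, wers, marker=\"o\")\n",
"plt.xlabel(\"step\")\n",
"plt.ylabel(\"validation WER\")\n",
"plt.show()\n",
"```"
]
},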
{
"cell_type": "markdown",
"metadata": {
"id": "4Ya7WEy0pd13"
},
"source": [
"You can now upload the result of the training to the 🤗 Hub, just execute this instruction:"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {
"id": "ArG1Thf6NBWm",
"outputId": "62ef1c3d-786c-4e25-f9c5-4020e71aa298"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Saving model checkpoint to wav2vec2-xlsr-fi-train-aug-lm-1B\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/config.json\n",
"Model weights saved in wav2vec2-xlsr-fi-train-aug-lm-1B/pytorch_model.bin\n",
"Configuration saved in wav2vec2-xlsr-fi-train-aug-lm-1B/preprocessor_config.json\n",
"Several commits (2) will be pushed upstream.\n",
"The progress bars may be unreliable.\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "b59f278120294e64a1192c8a6ec7c84d",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Upload file pytorch_model.bin: 0%| | 3.37k/3.59G [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"To https://huggingface.co/RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B\n",
" 53621f5..ddf9cad main -> main\n",
"\n",
"Dropping the following result as it does not have all the necessary fields:\n",
"{}\n",
"To https://huggingface.co/RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B\n",
" ddf9cad..e13053b main -> main\n",
"\n"
]
},
{
"data": {
"text/plain": [
"'https://huggingface.co/RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B/commit/ddf9cadee318e2732e136ec4aa2789c0de8e06fa'"
]
},
"execution_count": 53,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trainer.push_to_hub()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RHIVc44_fY2N"
},
"source": [
"You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier \"your-username/the-name-you-picked\" so for instance:"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5lWWIKyBpx1h"
},
"source": [
"```python\n",
"from transformers import AutoModelForCTC, Wav2Vec2Processor\n",
"\n",
"model = AutoModelForCTC.from_pretrained(\"patrickvonplaten/wav2vec2-large-xls-r-300m-tr-colab\")\n",
"processor = Wav2Vec2Processor.from_pretrained(\"patrickvonplaten/wav2vec2-large-xls-r-300m-tr-colab\")\n",
"```"
]
},
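{
"cell_type": "markdown",
"metadata": {},
"source": [
"They can also skip the manual forward pass entirely and run inference through the `pipeline` API. A short sketch against the checkpoint trained in this notebook; the audio path is a placeholder, any 16 kHz recording works:\n",
"\n",
"```python\n",
"from transformers import pipeline\n",
"\n",
"asr = pipeline(\"automatic-speech-recognition\", model=\"RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B\")\n",
"print(asr(\"path/to/audio.wav\"))  # e.g. {'text': '...'}\n",
"```"
]
},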
{
"cell_type": "markdown",
"metadata": {
"id": "pmi1cX0fRBit"
},
"source": [
"For more examples of how XLS-R can be fine-tuned, please take a look at the [official speech recognition examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition#examples)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "L8b8Qkoy3KyS"
},
"source": [
"### Evaluation\n",
"\n",
"As a final check, let's load the model and verify that it indeed has learned to transcribe Turkish speech.\n",
"\n",
"Let's first load the pretrained checkpoint."
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"collapsed": true,
"id": "R351I9IQp_9D",
"jupyter": {
"outputs_hidden": true
},
"outputId": "f2a2ee99-7db6-4962-e140-0107054102d3"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"loading configuration file wav2vec2-xlsr-fi-lm-1B/config.json\n",
"Model config Wav2Vec2Config {\n",
" \"_name_or_path\": \"facebook/wav2vec2-xls-r-1b\",\n",
" \"activation_dropout\": 0.055,\n",
" \"adapter_kernel_size\": 3,\n",
" \"adapter_stride\": 2,\n",
" \"add_adapter\": false,\n",
" \"apply_spec_augment\": true,\n",
" \"architectures\": [\n",
" \"Wav2Vec2ForCTC\"\n",
" ],\n",
" \"attention_dropout\": 0.094,\n",
" \"bos_token_id\": 1,\n",
" \"classifier_proj_size\": 256,\n",
" \"codevector_dim\": 1024,\n",
" \"contrastive_logits_temperature\": 0.1,\n",
" \"conv_bias\": true,\n",
" \"conv_dim\": [\n",
" 512,\n",
" 512,\n",
" 512,\n",
" 512,\n",
" 512,\n",
" 512,\n",
" 512\n",
" ],\n",
" \"conv_kernel\": [\n",
" 10,\n",
" 3,\n",
" 3,\n",
" 3,\n",
" 3,\n",
" 2,\n",
" 2\n",
" ],\n",
" \"conv_stride\": [\n",
" 5,\n",
" 2,\n",
" 2,\n",
" 2,\n",
" 2,\n",
" 2,\n",
" 2\n",
" ],\n",
" \"ctc_loss_reduction\": \"mean\",\n",
" \"ctc_zero_infinity\": false,\n",
" \"diversity_loss_weight\": 0.1,\n",
" \"do_stable_layer_norm\": true,\n",
" \"eos_token_id\": 2,\n",
" \"feat_extract_activation\": \"gelu\",\n",
" \"feat_extract_dropout\": 0.0,\n",
" \"feat_extract_norm\": \"layer\",\n",
" \"feat_proj_dropout\": 0.04,\n",
" \"feat_quantizer_dropout\": 0.0,\n",
" \"final_dropout\": 0.0,\n",
" \"gradient_checkpointing\": false,\n",
" \"hidden_act\": \"gelu\",\n",
" \"hidden_dropout\": 0.047,\n",
" \"hidden_size\": 1280,\n",
" \"initializer_range\": 0.02,\n",
" \"intermediate_size\": 5120,\n",
" \"layer_norm_eps\": 1e-05,\n",
" \"layerdrop\": 0.041,\n",
" \"mask_feature_length\": 10,\n",
" \"mask_feature_min_masks\": 0,\n",
" \"mask_feature_prob\": 0.0,\n",
" \"mask_time_length\": 10,\n",
" \"mask_time_min_masks\": 2,\n",
" \"mask_time_prob\": 0.082,\n",
" \"model_type\": \"wav2vec2\",\n",
" \"num_adapter_layers\": 3,\n",
" \"num_attention_heads\": 16,\n",
" \"num_codevector_groups\": 2,\n",
" \"num_codevectors_per_group\": 320,\n",
" \"num_conv_pos_embedding_groups\": 16,\n",
" \"num_conv_pos_embeddings\": 128,\n",
" \"num_feat_extract_layers\": 7,\n",
" \"num_hidden_layers\": 48,\n",
" \"num_negatives\": 100,\n",
" \"output_hidden_size\": 1280,\n",
" \"pad_token_id\": 33,\n",
" \"proj_codevector_dim\": 1024,\n",
" \"tdnn_dilation\": [\n",
" 1,\n",
" 2,\n",
" 3,\n",
" 1,\n",
" 1\n",
" ],\n",
" \"tdnn_dim\": [\n",
" 512,\n",
" 512,\n",
" 512,\n",
" 512,\n",
" 1500\n",
" ],\n",
" \"tdnn_kernel\": [\n",
" 5,\n",
" 3,\n",
" 3,\n",
" 1,\n",
" 1\n",
" ],\n",
" \"torch_dtype\": \"float32\",\n",
" \"transformers_version\": \"4.16.0.dev0\",\n",
" \"use_weighted_layer_sum\": false,\n",
" \"vocab_size\": 36,\n",
" \"xvector_output_dim\": 512\n",
"}\n",
"\n",
"loading weights file wav2vec2-xlsr-fi-lm-1B/pytorch_model.bin\n",
"All model checkpoint weights were used when initializing Wav2Vec2ForCTC.\n",
"\n",
"All the weights of Wav2Vec2ForCTC were initialized from the model checkpoint at wav2vec2-xlsr-fi-lm-1B.\n",
"If your task is similar to the task the model of the checkpoint was trained on, you can already use Wav2Vec2ForCTC for predictions without further training.\n",
"loading feature extractor configuration file wav2vec2-xlsr-fi-lm-1B/preprocessor_config.json\n",
"Feature extractor Wav2Vec2FeatureExtractor {\n",
" \"do_normalize\": true,\n",
" \"feature_extractor_type\": \"Wav2Vec2FeatureExtractor\",\n",
" \"feature_size\": 1,\n",
" \"padding_side\": \"right\",\n",
" \"padding_value\": 0.0,\n",
" \"return_attention_mask\": true,\n",
" \"sampling_rate\": 16000\n",
"}\n",
"\n",
"Didn't find file wav2vec2-xlsr-fi-lm-1B/tokenizer.json. We won't load it.\n",
"loading file wav2vec2-xlsr-fi-lm-1B/vocab.json\n",
"loading file wav2vec2-xlsr-fi-lm-1B/tokenizer_config.json\n",
"loading file wav2vec2-xlsr-fi-lm-1B/added_tokens.json\n",
"loading file wav2vec2-xlsr-fi-lm-1B/special_tokens_map.json\n",
"loading file None\n",
"Adding to the vocabulary\n",
"Adding to the vocabulary\n"
]
}
],
"source": [
"model = Wav2Vec2ForCTC.from_pretrained(repo_name).to(\"cuda\")\n",
"processor = Wav2Vec2Processor.from_pretrained(repo_name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jD7TZ1YS3S_K"
},
"source": [
"\n",
"Now, we will just take the first example of the test set, run it through the model and take the `argmax(...)` of the logits to retrieve the predicted token ids."
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {
"id": "pax07TnL3WZn",
"outputId": "867787ff-0cb7-41e9-f926-96f7b53e7134"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"It is strongly recommended to pass the ``sampling_rate`` argument to this function. Failing to do so can result in silent errors that might be hard to debug.\n"
]
}
],
"source": [
"input_dict = processor(common_voice_test_audio[0][\"input_values\"], return_tensors=\"pt\", padding=True)\n",
"\n",
"logits = model(input_dict.input_values.to(\"cuda\")).logits\n",
"\n",
"pred_ids = torch.argmax(logits, dim=-1)[0]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7nkzSQu53Zs2"
},
"source": [
"We adapted `common_voice_test` quite a bit so that the dataset instance does not contain the original sentence label anymore. Thus, we re-use the original dataset to get the label of the first example."
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {
"id": "fe2AE-2xqKHx",
"outputId": "1d8321b3-4f41-4d71-e74e-f33f32a7b261"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "cc560e0bf71f49ae8d914321b060cd23",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Downloading: 0%| | 0.00/4.62k [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "61c0cfa14df94e70ae1373c6acbb83d7",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Downloading: 0%| | 0.00/10.7k [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using custom data configuration fi-00b7e43d66b8c1a3\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading and preparing dataset common_voice/fi (download: 47.57 MiB, generated: 53.09 MiB, post-processed: Unknown size, total: 100.66 MiB) to /workspace/.cache/huggingface/datasets/common_voice/fi-00b7e43d66b8c1a3/6.1.0/5693bfc0feeade582a78c2fb250bc88f52bd86f0a7f1bb22bfee67e715de30fd...\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "8c368749d80649938983a19e806fa178",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Downloading: 0%| | 0.00/49.9M [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"0 examples [00:00, ? examples/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"0 examples [00:00, ? examples/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"0 examples [00:00, ? examples/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"0 examples [00:00, ? examples/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"0 examples [00:00, ? examples/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset common_voice downloaded and prepared to /workspace/.cache/huggingface/datasets/common_voice/fi-00b7e43d66b8c1a3/6.1.0/5693bfc0feeade582a78c2fb250bc88f52bd86f0a7f1bb22bfee67e715de30fd. Subsequent calls will reuse this data.\n"
]
}
],
"source": [
"common_voice_test_transcription = load_dataset(\"common_voice\", \"fi\", data_dir=\"./cv-corpus-6.1-2020-12-11\", split=\"test\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "epu8kCQZ3h70"
},
"source": [
"\n",
"Finally, we can decode the example."
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {
"id": "K4xWqmk_qMn0",
"outputId": "d9e40b3c-f02a-48a1-d081-6d7e8b37dcaf"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prediction:\n",
"nyt en misään tapauksessa sinula auttaa\n",
"\n",
"Reference:\n",
"tiibetinspanielit olivat luostareiden ja ylhäisön arvostettuja seurakoiria\n"
]
}
],
"source": [
"print(\"Prediction:\")\n",
"print(processor.decode(pred_ids))\n",
"\n",
"print(\"\\nReference:\")\n",
"print(common_voice_test_transcription[0][\"sentence\"].lower())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HwhyoMml3oOT"
},
"source": [
"Alright! The transcription can definitely be recognized from our prediction, but it is not perfect yet. Training the model a bit longer, spending more time on the data preprocessing, and especially using a language model for decoding would certainly improve the model's overall performance.\n",
"\n",
"For a demonstration model on a low-resource language, the results are quite acceptable however 🤗."
]
}
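,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a pointer for that last suggestion: if an n-gram language model is attached to the repository (the `-lm` suffix in the checkpoint name suggests that is the plan), decoding with it is a small change. A hedged sketch using `Wav2Vec2ProcessorWithLM`, assuming the repo ships the LM files alongside the tokenizer:\n",
"\n",
"```python\n",
"from transformers import Wav2Vec2ProcessorWithLM\n",
"\n",
"# Assumes the repo contains a KenLM model (e.g. under language_model/).\n",
"processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained(\"RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B\")\n",
"\n",
"# Re-use the logits computed above; beam-search decoding with the LM\n",
"# replaces the plain argmax.\n",
"transcription = processor_with_lm.batch_decode(logits.cpu().detach().numpy()).text\n",
"print(transcription[0])\n",
"```"
]
}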
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}