
wav2vec2-darija

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how these metrics are typically computed follows the list):

  • Loss: 0.4265
  • WER (word error rate): 0.4101
  • CER (character error rate): 0.1327
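
A minimal sketch of computing WER and CER with the Hugging Face `evaluate` library; the prediction and reference strings below are illustrative placeholders, not data from this model's evaluation set:

```python
# Minimal sketch: computing WER/CER with the `evaluate` library.
# The strings below are illustrative placeholders, not actual eval data.
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["salam labas"]       # hypothetical model transcription
references = ["salam labas 3lik"]   # hypothetical ground-truth transcript

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```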

Model description

More information needed

Intended uses & limitations

More information needed
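
Although the card provides no official usage snippet, a minimal inference sketch for a Wav2Vec2 CTC checkpoint follows. It assumes 16 kHz mono audio and greedy decoding; `audio.wav` is a placeholder path:

```python
# Minimal inference sketch for a Wav2Vec2 CTC checkpoint.
# Assumes 16 kHz mono input; "audio.wav" is a placeholder path.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("khaoulaoub/wav2vec2-darija")
model = Wav2Vec2ForCTC.from_pretrained("khaoulaoub/wav2vec2-darija")

speech, _ = librosa.load("audio.wav", sr=16_000)  # load and resample to 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(predicted_ids)[0])
```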

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mapped onto a TrainingArguments sketch after the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 30
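
For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder and the original training script is not published with this card:

```python
# Minimal sketch mapping the listed hyperparameters onto TrainingArguments.
# output_dir is a placeholder; the original training script is not published.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-darija",    # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
)
```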

Training results

| Training Loss | Epoch | Step | Validation Loss | WER | CER |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 10.5265 | 0.49 | 500 | 3.2082 | 1.0 | 0.9971 |
| 2.9629 | 0.98 | 1000 | 2.9619 | 1.0000 | 0.9357 |
| 2.6527 | 1.46 | 1500 | 1.4934 | 1.0974 | 0.4550 |
| 1.2946 | 1.95 | 2000 | 0.6904 | 0.7809 | 0.2566 |
| 0.9059 | 2.44 | 2500 | 0.5906 | 0.7002 | 0.2190 |
| 0.8133 | 2.93 | 3000 | 0.5199 | 0.6382 | 0.2049 |
| 0.7026 | 3.41 | 3500 | 0.4570 | 0.5986 | 0.1919 |
| 0.6572 | 3.9 | 4000 | 0.4238 | 0.5792 | 0.1848 |
| 0.5904 | 4.39 | 4500 | 0.4116 | 0.5637 | 0.1807 |
| 0.5593 | 4.88 | 5000 | 0.3850 | 0.5414 | 0.1734 |
| 0.5084 | 5.37 | 5500 | 0.3951 | 0.5409 | 0.1733 |
| 0.5033 | 5.85 | 6000 | 0.4449 | 0.5176 | 0.1649 |
| 0.4712 | 6.34 | 6500 | 0.5485 | 0.5172 | 0.1658 |
| 0.4459 | 6.83 | 7000 | 0.5259 | 0.5061 | 0.1623 |
| 0.4246 | 7.32 | 7500 | 0.3686 | 0.4991 | 0.1605 |
| 0.4261 | 7.8 | 8000 | 0.3663 | 0.4898 | 0.1589 |
| 0.4078 | 8.29 | 8500 | 0.3740 | 0.4858 | 0.1564 |
| 0.3783 | 8.78 | 9000 | 0.3907 | 0.4824 | 0.1566 |
| 0.3647 | 9.27 | 9500 | 0.3424 | 0.4750 | 0.1525 |
| 0.3527 | 9.76 | 10000 | 0.3444 | 0.4692 | 0.1513 |
| 0.3482 | 10.24 | 10500 | 0.3856 | 0.4692 | 0.1507 |
| 0.3338 | 10.73 | 11000 | 0.3650 | 0.4664 | 0.1512 |
| 0.3198 | 11.22 | 11500 | 0.3516 | 0.4628 | 0.1492 |
| 0.3218 | 11.71 | 12000 | 0.3660 | 0.4644 | 0.1491 |
| 0.3115 | 12.2 | 12500 | 0.3490 | 0.4545 | 0.1475 |
| 0.2977 | 12.68 | 13000 | 0.3555 | 0.4504 | 0.1451 |
| 0.2958 | 13.17 | 13500 | 0.3425 | 0.4571 | 0.1449 |
| 0.278 | 13.66 | 14000 | 0.4035 | 0.4520 | 0.1446 |
| 0.2716 | 14.15 | 14500 | 0.3552 | 0.4492 | 0.1437 |
| 0.2729 | 14.63 | 15000 | 0.3665 | 0.4470 | 0.1432 |
| 0.2691 | 15.12 | 15500 | 0.3700 | 0.4498 | 0.1444 |
| 0.2563 | 15.61 | 16000 | 0.3658 | 0.4423 | 0.1421 |
| 0.2511 | 16.1 | 16500 | 0.4152 | 0.4408 | 0.1425 |
| 0.2348 | 16.59 | 17000 | 0.4889 | 0.4375 | 0.1416 |
| 0.2437 | 17.07 | 17500 | 0.4209 | 0.4382 | 0.1413 |
| 0.2388 | 17.56 | 18000 | 0.6032 | 0.4359 | 0.1408 |
| 0.2235 | 18.05 | 18500 | 0.4831 | 0.4369 | 0.1402 |
| 0.2197 | 18.54 | 19000 | 0.4989 | 0.4345 | 0.1402 |
| 0.2285 | 19.02 | 19500 | 0.5929 | 0.4342 | 0.1393 |
| 0.2224 | 19.51 | 20000 | 0.4098 | 0.4317 | 0.1398 |
| 0.2183 | 20.0 | 20500 | 0.3547 | 0.4254 | 0.1384 |
| 0.2113 | 20.49 | 21000 | 0.3926 | 0.4324 | 0.1385 |
| 0.2125 | 20.98 | 21500 | 0.3982 | 0.4299 | 0.1386 |
| 0.201 | 21.46 | 22000 | 0.3929 | 0.4293 | 0.1389 |
| 0.2002 | 21.95 | 22500 | 0.4047 | 0.4218 | 0.1372 |
| 0.2029 | 22.44 | 23000 | 0.5153 | 0.4235 | 0.1375 |
| 0.195 | 22.93 | 23500 | 0.5601 | 0.4198 | 0.1364 |
| 0.182 | 23.41 | 24000 | 0.4596 | 0.4168 | 0.1355 |
| 0.1889 | 23.9 | 24500 | 0.4165 | 0.4209 | 0.1353 |
| 0.1795 | 24.39 | 25000 | 0.4096 | 0.4185 | 0.1352 |
| 0.1809 | 24.88 | 25500 | 0.4732 | 0.4126 | 0.1341 |
| 0.1762 | 25.37 | 26000 | 0.4324 | 0.4146 | 0.1347 |
| 0.1764 | 25.85 | 26500 | 0.4462 | 0.4160 | 0.1347 |
| 0.1805 | 26.34 | 27000 | 0.3955 | 0.4107 | 0.1333 |
| 0.1733 | 26.83 | 27500 | 0.4182 | 0.4135 | 0.1336 |
| 0.1651 | 27.32 | 28000 | 0.4111 | 0.4104 | 0.1330 |
| 0.1713 | 27.8 | 28500 | 0.4426 | 0.4126 | 0.1332 |
| 0.1766 | 28.29 | 29000 | 0.4426 | 0.4085 | 0.1328 |
| 0.1631 | 28.78 | 29500 | 0.4248 | 0.4083 | 0.1328 |
| 0.1608 | 29.27 | 30000 | 0.4334 | 0.4096 | 0.1327 |
| 0.1688 | 29.76 | 30500 | 0.4265 | 0.4101 | 0.1327 |

Framework versions

  • Transformers 4.34.0
  • PyTorch 2.1.0+cu121
  • Datasets 2.14.5
  • Tokenizers 0.14.1

Model tree for khaoulaoub/wav2vec2-darija

Fine-tuned from facebook/wav2vec2-large-xlsr-53 (one of 206 fine-tuned versions of that base model).