
# test_model_dir

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. It achieves the following results on the evaluation set:

- Loss: 2.7181
- Accuracy: 0.0885
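
For inference, the checkpoint can be loaded through the `audio-classification` pipeline. A minimal sketch, assuming a local 16 kHz audio file named `example.wav` (the file name is a placeholder; wav2vec2-base expects 16 kHz input):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for intent classification.
classifier = pipeline("audio-classification", model="sixcarben/test_model_dir")

# "example.wav" is a placeholder path; wav2vec2-base expects 16 kHz mono audio.
predictions = classifier("example.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, best first
```

Note that minds14 has 14 intent classes, so the 0.0885 evaluation accuracy is barely above chance (1/14 ≈ 0.071); predictions from this checkpoint should be treated as illustrative.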

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
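
The card does not document the data splits. Since it names the minds14 dataset, a hedged loading sketch follows; the `PolyAI/minds14` hub id and the `en-US` configuration are assumptions, as the card does not say which configuration or split was used:

```python
from datasets import Audio, load_dataset

# Hub id and configuration are assumptions; the card only names "minds14".
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
print(minds.features["intent_class"])  # ClassLabel listing the intent names

# wav2vec2-base expects 16 kHz input, so resample the audio column if needed.
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
```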

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
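
These settings map directly onto `transformers.TrainingArguments`; the Adam betas and epsilon listed above are the library defaults. A minimal reproduction sketch, with the `output_dir` as a placeholder and the model/data wiring omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test_model_dir",       # placeholder output path
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,     # 64 x 4 = 256 total train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the defaults, so they
    # do not need to be set explicitly.
)
```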

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 2    | 2.6433          | 0.0354   |
| No log        | 2.0   | 4    | 2.6427          | 0.0265   |
| No log        | 3.0   | 6    | 2.6412          | 0.0619   |
| No log        | 4.0   | 8    | 2.6391          | 0.0885   |
| 2.6378        | 5.0   | 10   | 2.6384          | 0.1239   |
| 2.6378        | 6.0   | 12   | 2.6380          | 0.0973   |
| 2.6378        | 7.0   | 14   | 2.6375          | 0.0708   |
| 2.6378        | 8.0   | 16   | 2.6415          | 0.0796   |
| 2.6378        | 9.0   | 18   | 2.6399          | 0.0531   |
| 2.6288        | 10.0  | 20   | 2.6450          | 0.0796   |
| 2.6288        | 11.0  | 22   | 2.6450          | 0.0619   |
| 2.6288        | 12.0  | 24   | 2.6452          | 0.0708   |
| 2.6288        | 13.0  | 26   | 2.6479          | 0.0708   |
| 2.6288        | 14.0  | 28   | 2.6496          | 0.0619   |
| 2.6185        | 15.0  | 30   | 2.6522          | 0.0796   |
| 2.6185        | 16.0  | 32   | 2.6558          | 0.0796   |
| 2.6185        | 17.0  | 34   | 2.6567          | 0.0708   |
| 2.6185        | 18.0  | 36   | 2.6572          | 0.0619   |
| 2.6185        | 19.0  | 38   | 2.6611          | 0.0619   |
| 2.6069        | 20.0  | 40   | 2.6629          | 0.0619   |
| 2.6069        | 21.0  | 42   | 2.6621          | 0.0531   |
| 2.6069        | 22.0  | 44   | 2.6663          | 0.0531   |
| 2.6069        | 23.0  | 46   | 2.6672          | 0.0442   |
| 2.6069        | 24.0  | 48   | 2.6645          | 0.0531   |
| 2.599         | 25.0  | 50   | 2.6670          | 0.0708   |
| 2.599         | 26.0  | 52   | 2.6692          | 0.0531   |
| 2.599         | 27.0  | 54   | 2.6653          | 0.0708   |
| 2.599         | 28.0  | 56   | 2.6669          | 0.0885   |
| 2.599         | 29.0  | 58   | 2.6797          | 0.0619   |
| 2.5767        | 30.0  | 60   | 2.6781          | 0.0354   |
| 2.5767        | 31.0  | 62   | 2.6861          | 0.0265   |
| 2.5767        | 32.0  | 64   | 2.6852          | 0.0442   |
| 2.5767        | 33.0  | 66   | 2.6733          | 0.0442   |
| 2.5767        | 34.0  | 68   | 2.6881          | 0.0708   |
| 2.5771        | 35.0  | 70   | 2.6800          | 0.0708   |
| 2.5771        | 36.0  | 72   | 2.6777          | 0.0619   |
| 2.5771        | 37.0  | 74   | 2.6761          | 0.0708   |
| 2.5771        | 38.0  | 76   | 2.6657          | 0.0619   |
| 2.5771        | 39.0  | 78   | 2.6667          | 0.0708   |
| 2.5636        | 40.0  | 80   | 2.6681          | 0.0708   |
| 2.5636        | 41.0  | 82   | 2.6649          | 0.0796   |
| 2.5636        | 42.0  | 84   | 2.6598          | 0.0796   |
| 2.5636        | 43.0  | 86   | 2.6627          | 0.0619   |
| 2.5636        | 44.0  | 88   | 2.6596          | 0.0796   |
| 2.5608        | 45.0  | 90   | 2.6511          | 0.0796   |
| 2.5608        | 46.0  | 92   | 2.6522          | 0.0708   |
| 2.5608        | 47.0  | 94   | 2.6610          | 0.0708   |
| 2.5608        | 48.0  | 96   | 2.6638          | 0.0531   |
| 2.5608        | 49.0  | 98   | 2.6642          | 0.0619   |
| 2.5432        | 50.0  | 100  | 2.6596          | 0.0796   |
| 2.5432        | 51.0  | 102  | 2.6675          | 0.0885   |
| 2.5432        | 52.0  | 104  | 2.6964          | 0.0885   |
| 2.5432        | 53.0  | 106  | 2.7030          | 0.0531   |
| 2.5432        | 54.0  | 108  | 2.7016          | 0.0531   |
| 2.5295        | 55.0  | 110  | 2.6918          | 0.0619   |
| 2.5295        | 56.0  | 112  | 2.6893          | 0.0619   |
| 2.5295        | 57.0  | 114  | 2.6936          | 0.0708   |
| 2.5295        | 58.0  | 116  | 2.6905          | 0.0885   |
| 2.5295        | 59.0  | 118  | 2.6838          | 0.0796   |
| 2.5207        | 60.0  | 120  | 2.6845          | 0.0708   |
| 2.5207        | 61.0  | 122  | 2.6896          | 0.0708   |
| 2.5207        | 62.0  | 124  | 2.6965          | 0.0796   |
| 2.5207        | 63.0  | 126  | 2.6971          | 0.1062   |
| 2.5207        | 64.0  | 128  | 2.6982          | 0.0973   |
| 2.5015        | 65.0  | 130  | 2.7037          | 0.0885   |
| 2.5015        | 66.0  | 132  | 2.7065          | 0.0973   |
| 2.5015        | 67.0  | 134  | 2.7078          | 0.0973   |
| 2.5015        | 68.0  | 136  | 2.7055          | 0.0973   |
| 2.5015        | 69.0  | 138  | 2.7023          | 0.0973   |
| 2.4869        | 70.0  | 140  | 2.6923          | 0.1062   |
| 2.4869        | 71.0  | 142  | 2.6906          | 0.1062   |
| 2.4869        | 72.0  | 144  | 2.6989          | 0.1062   |
| 2.4869        | 73.0  | 146  | 2.7078          | 0.0885   |
| 2.4869        | 74.0  | 148  | 2.7106          | 0.0973   |
| 2.4638        | 75.0  | 150  | 2.7117          | 0.0796   |
| 2.4638        | 76.0  | 152  | 2.7119          | 0.0796   |
| 2.4638        | 77.0  | 154  | 2.7153          | 0.0708   |
| 2.4638        | 78.0  | 156  | 2.7111          | 0.0708   |
| 2.4638        | 79.0  | 158  | 2.7086          | 0.0885   |
| 2.4408        | 80.0  | 160  | 2.7000          | 0.1150   |
| 2.4408        | 81.0  | 162  | 2.6915          | 0.1062   |
| 2.4408        | 82.0  | 164  | 2.6907          | 0.1062   |
| 2.4408        | 83.0  | 166  | 2.6908          | 0.0973   |
| 2.4408        | 84.0  | 168  | 2.6926          | 0.0796   |
| 2.4688        | 85.0  | 170  | 2.6984          | 0.1062   |
| 2.4688        | 86.0  | 172  | 2.7039          | 0.1062   |
| 2.4688        | 87.0  | 174  | 2.7053          | 0.0973   |
| 2.4688        | 88.0  | 176  | 2.7098          | 0.0796   |
| 2.4688        | 89.0  | 178  | 2.7100          | 0.0885   |
| 2.4379        | 90.0  | 180  | 2.7113          | 0.1062   |
| 2.4379        | 91.0  | 182  | 2.7121          | 0.0973   |
| 2.4379        | 92.0  | 184  | 2.7127          | 0.0973   |
| 2.4379        | 93.0  | 186  | 2.7162          | 0.0973   |
| 2.4379        | 94.0  | 188  | 2.7189          | 0.0973   |
| 2.4385        | 95.0  | 190  | 2.7199          | 0.0885   |
| 2.4385        | 96.0  | 192  | 2.7186          | 0.0796   |
| 2.4385        | 97.0  | 194  | 2.7182          | 0.0885   |
| 2.4385        | 98.0  | 196  | 2.7183          | 0.0885   |
| 2.4385        | 99.0  | 198  | 2.7182          | 0.0885   |
| 2.4402        | 100.0 | 200  | 2.7181          | 0.0885   |
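
The Accuracy column is typically produced by a `compute_metrics` callback passed to the `Trainer`. The exact function used here is not given in the card; a plausible sketch using the `evaluate` library:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    """Argmax over the class logits, then compare against reference labels."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```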

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1