---
license: apache-2.0
base_model: Bisher/wav2vec2_ASV_deepfake_audio_detection_DF_finetune_frozen
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ourData_train
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: audiofolder
      type: audiofolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9790442116023511
    - name: Precision
      type: precision
      value: 0.9805875236043416
    - name: Recall
      type: recall
      value: 0.9790442116023511
    - name: F1
      type: f1
      value: 0.979465951529853
---

[Visualize in Weights & Biases](https://wandb.ai/bishertello-/huggingface/runs/m9t2y29z)

# ourData_train

This model is a fine-tuned version of [Bisher/wav2vec2_ASV_deepfake_audio_detection_DF_finetune_frozen](https://huggingface.co/Bisher/wav2vec2_ASV_deepfake_audio_detection_DF_finetune_frozen) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0655
- Accuracy: 0.9790
- Precision: 0.9806
- Recall: 0.9790
- F1: 0.9795
- TP: 518
- TN: 3313
- FN: 14
- FP: 68
- EER: 0.0216
- min t-DCF: 0.0069
- AUC-ROC: 0.9984

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
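A minimal sketch of these settings expressed as `transformers.TrainingArguments`, assuming a single training device (64 per-device × 4 accumulation steps = 256 total); the `output_dir` is an illustrative assumption, not taken from the original run:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above. "ourData_train" as the
# output directory is an assumption, not confirmed by the original run.
training_args = TrainingArguments(
    output_dir="ourData_train",
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,  # 64 * 4 = total train batch size of 256
    adam_beta1=0.9,                 # Adam settings match the defaults
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
    fp16=True,                      # Native AMP mixed precision
)
```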
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | TP | TN | FN | FP | EER | min t-DCF | AUC-ROC |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:---:|:----:|:---:|:----:|:------:|:---------:|:-------:|
| 2.0925 | 0.0816 | 5 | 2.4908 | 0.3095 | 0.7455 | 0.3095 | 0.3596 | 376 | 835 | 156 | 2546 | 0.5167 | 0.0499 | 0.4827 |
| 1.8849 | 0.1633 | 10 | 1.8190 | 0.4613 | 0.7686 | 0.4613 | 0.5381 | 303 | 1502 | 229 | 1879 | 0.4874 | 0.05 | 0.5170 |
| 1.1783 | 0.2449 | 15 | 0.9851 | 0.3330 | 0.8128 | 0.3330 | 0.3755 | 455 | 848 | 77 | 2533 | 0.3856 | 0.0499 | 0.6498 |
| 0.661 | 0.3265 | 20 | 0.5800 | 0.8684 | 0.8702 | 0.8684 | 0.8693 | 283 | 3115 | 249 | 266 | 0.2256 | 0.0446 | 0.8570 |
| 0.5967 | 0.4082 | 25 | 0.4366 | 0.8433 | 0.9017 | 0.8433 | 0.8609 | 448 | 2852 | 84 | 529 | 0.1559 | 0.0338 | 0.9265 |
| 0.3993 | 0.4898 | 30 | 0.2574 | 0.8919 | 0.9281 | 0.8919 | 0.9020 | 488 | 3002 | 44 | 379 | 0.0973 | 0.0228 | 0.9650 |
| 0.25 | 0.5714 | 35 | 0.1732 | 0.9422 | 0.9512 | 0.9422 | 0.9449 | 487 | 3200 | 45 | 181 | 0.0636 | 0.0147 | 0.9811 |
| 0.2353 | 0.6531 | 40 | 0.3041 | 0.9540 | 0.9526 | 0.9540 | 0.9529 | 413 | 3320 | 119 | 61 | 0.0769 | 0.0177 | 0.9787 |
| 0.1748 | 0.7347 | 45 | 0.1481 | 0.9601 | 0.9645 | 0.9601 | 0.9614 | 499 | 3258 | 33 | 123 | 0.0515 | 0.0126 | 0.9904 |
| 0.1273 | 0.8163 | 50 | 0.1373 | 0.9698 | 0.9702 | 0.9698 | 0.9700 | 479 | 3316 | 53 | 65 | 0.0451 | 0.0117 | 0.9939 |
| 0.143 | 0.8980 | 55 | 0.2027 | 0.9640 | 0.9635 | 0.9640 | 0.9637 | 453 | 3319 | 79 | 62 | 0.0545 | 0.0130 | 0.9910 |
| 0.1021 | 0.9796 | 60 | 0.1321 | 0.9701 | 0.9710 | 0.9701 | 0.9704 | 488 | 3308 | 44 | 73 | 0.0494 | 0.0101 | 0.9939 |
| 0.0694 | 1.0612 | 65 | 0.1845 | 0.9612 | 0.9626 | 0.9612 | 0.9617 | 475 | 3286 | 57 | 95 | 0.0564 | 0.0123 | 0.9906 |
| 0.0665 | 1.1429 | 70 | 0.1669 | 0.9681 | 0.9682 | 0.9681 | 0.9681 | 473 | 3315 | 59 | 66 | 0.0447 | 0.0116 | 0.9940 |
| 0.069 | 1.2245 | 75 | 0.1528 | 0.9691 | 0.9698 | 0.9691 | 0.9694 | 483 | 3309 | 49 | 72 | 0.0429 | 0.0114 | 0.9950 |
| 0.042 | 1.3061 | 80 | 0.1797 | 0.9693 | 0.9701 | 0.9693 | 0.9696 | 485 | 3308 | 47 | 73 | 0.0438 | 0.0103 | 0.9946 |
| 0.0689 | 1.3878 | 85 | 0.1625 | 0.9711 | 0.9718 | 0.9711 | 0.9714 | 488 | 3312 | 44 | 69 | 0.0399 | 0.0106 | 0.9956 |
| 0.0739 | 1.4694 | 90 | 0.0861 | 0.9750 | 0.9768 | 0.9750 | 0.9755 | 512 | 3303 | 20 | 78 | 0.0282 | 0.0080 | 0.9976 |
| 0.0556 | 1.5510 | 95 | 0.1952 | 0.9778 | 0.9775 | 0.9778 | 0.9775 | 474 | 3352 | 58 | 29 | 0.0358 | 0.0095 | 0.9963 |
| 0.056 | 1.6327 | 100 | 0.1888 | 0.9767 | 0.9764 | 0.9767 | 0.9765 | 472 | 3350 | 60 | 31 | 0.0364 | 0.0087 | 0.9959 |
| 0.0312 | 1.7143 | 105 | 0.1572 | 0.9783 | 0.9781 | 0.9783 | 0.9781 | 482 | 3346 | 50 | 35 | 0.0376 | 0.0088 | 0.9962 |
| 0.0532 | 1.7959 | 110 | 0.1333 | 0.9775 | 0.9778 | 0.9775 | 0.9776 | 494 | 3331 | 38 | 50 | 0.0338 | 0.0088 | 0.9967 |
| 0.0575 | 1.8776 | 115 | 0.0958 | 0.9798 | 0.9805 | 0.9798 | 0.9800 | 507 | 3327 | 25 | 54 | 0.0301 | 0.0080 | 0.9976 |
| 0.0624 | 1.9592 | 120 | 0.0655 | 0.9790 | 0.9806 | 0.9790 | 0.9795 | 518 | 3313 | 14 | 68 | 0.0216 | 0.0069 | 0.9984 |

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
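Since the usage sections above are still placeholders, here is a minimal inference sketch using the `transformers` audio-classification pipeline. The repo id shown is the *base* model, used as a stand-in because the published Hub id of this fine-tuned checkpoint is not stated in the card; `sample.wav` is a placeholder path:

```python
from transformers import pipeline

# The repo id below is the base model, used only as a stand-in;
# substitute the Hub id under which this fine-tuned checkpoint is published.
classifier = pipeline(
    "audio-classification",
    model="Bisher/wav2vec2_ASV_deepfake_audio_detection_DF_finetune_frozen",
)

# "sample.wav" is a placeholder path; the pipeline decodes the file and
# resamples it to the feature extractor's rate (16 kHz for wav2vec2).
predictions = classifier("sample.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```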
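Note that the EER, min t-DCF, and AUC-ROC columns are score-based metrics rather than functions of the hard TP/TN/FN/FP counts. The card does not include the metric code, but EER and AUC-ROC can be reproduced from raw scores along these lines (a sketch using scikit-learn; the label and score arrays are placeholders, and min t-DCF is omitted because it additionally requires the ASVspoof cost-model parameters):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def compute_eer(labels: np.ndarray, scores: np.ndarray) -> float:
    """Equal error rate: the operating point where FPR equals FNR (1 - TPR)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[idx] + fnr[idx]) / 2)

# Placeholder data: 1 = target class, 0 = non-target; scores are the
# model's probabilities (or logits) for the target class.
labels = np.array([0, 0, 1, 1])
scores = np.array([0.10, 0.35, 0.80, 0.70])
print("EER:", compute_eer(labels, scores))
print("AUC-ROC:", roc_auc_score(labels, scores))
```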