Yassmen committed on
Commit
d70472b
1 Parent(s): 6b297fc

End of training

README.md CHANGED
@@ -14,23 +14,17 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yassmenyoussef55-arete-global/huggingface/runs/z6qr57oj)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yassmenyoussef55-arete-global/huggingface/runs/gt6e5ppa)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yassmenyoussef55-arete-global/huggingface/runs/gt6e5ppa)
 
 # mixed_model_finetuned_cremad
 
-This model was trained from scratch on the [CREMA-D dataset](https://github.com/CheyneyComputerScience/CREMA-D). To build this multimodal model, we combined wav2vec2, pretrained on audio data, with a 3D ResNet, pretrained on video data.
-
-The dataset provides 7442 recordings of actors performing six different emotions in English:
-
-```python
-emotions = ['angry', 'disgust', 'fearful', 'happy', 'neutral', 'sad']
-```
-
+This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5007
-- Accuracy: 0.8293
-- F1: 0.8284
-- Recall: 0.8293
-- Precision: 0.8304
+- Loss: 0.3098
+- Accuracy: 0.8972
+- F1: 0.8960
+- Recall: 0.8972
+- Precision: 0.8974
 
 ## Model description
 
@@ -50,24 +44,25 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 16
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 3.0
+- training_steps: 743
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
 |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
-| 0.6929 | 0.9976 | 371 | 0.8748 | 0.6747 | 0.6698 | 0.6747 | 0.7234 |
-| 0.5794 | 1.9980 | 743 | 0.5378 | 0.8004 | 0.7999 | 0.8004 | 0.8047 |
-| 0.3597 | 2.9929 | 1113 | 0.5007 | 0.8293 | 0.8284 | 0.8293 | 0.8304 |
+| 0.7914 | 1.0 | 186 | 1.0595 | 0.7171 | 0.7074 | 0.7171 | 0.7536 |
+| 0.5971 | 2.0 | 372 | 0.4401 | 0.8414 | 0.8375 | 0.8414 | 0.8443 |
+| 0.2891 | 3.0 | 558 | 0.3863 | 0.8548 | 0.8539 | 0.8548 | 0.8622 |
+| 0.1833 | 3.9946 | 743 | 0.3098 | 0.8972 | 0.8960 | 0.8972 | 0.8974 |
 
 ### Framework versions
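
As a sanity check on the updated hyperparameters above: the reported total_train_batch_size of 32 is simply the per-device batch size times the gradient-accumulation steps (4 × 8), which is how the HF Trainer reports it on a single device. A minimal sketch in plain Python (the helper name is ours, single-device assumed):

```python
def effective_batch_size(per_device_batch: int, grad_accum_steps: int, num_devices: int = 1) -> int:
    """Effective (total) train batch size as reported in Trainer-generated model cards."""
    return per_device_batch * grad_accum_steps * num_devices

# Values from the updated card: train_batch_size=4, gradient_accumulation_steps=8.
print(effective_batch_size(4, 8))  # → 32
```

The same arithmetic explains the old card's value: 2 × 8 = 16.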
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:450e796376b8871ab8fd210118ba93fde1462e0d7596d3615638570c66e2b83e
+oid sha256:494740b37cd604ee78f856c9a6104509b2f186e7a25c9e49308f7a117aaf8828
 size 1609602280
runs/Jul26_18-45-48_803b3d907cf1/events.out.tfevents.1722019556.803b3d907cf1.34.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:388891396529059945bb22a30fc89d225c47648157d5cb605b0eb76027e26255
+size 4475
runs/Jul26_18-53-42_803b3d907cf1/events.out.tfevents.1722020025.803b3d907cf1.34.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66338f084fee8944d276a9381147f1bf753b9b20bb16d47d3d0494048c151a8e
+size 4475
runs/Jul26_18-56-13_803b3d907cf1/events.out.tfevents.1722020176.803b3d907cf1.34.2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e55eabe0e2558f6f88001d6da044c58fafb2f21862dec49b0fe8dd4883a00e0
+size 22282
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3d31818668a981c50e1942e3e04d98d57964c39f693d0ba99bc3212cdfce9d5f
+oid sha256:2f42f75a30929f9d62caf66cf4b79f256dcf12463dab5fb343f2449563a65432
 size 5176
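
The binary file changes in this commit are git-lfs pointer files rather than the payloads themselves: three text lines (version, oid, size) that stand in for the tracked blob. A minimal parser sketch in plain Python (the helper name is ours), using the new training_args.bin pointer from this commit as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file (version / oid / size lines) into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].split(":", 1)[1],
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:2f42f75a30929f9d62caf66cf4b79f256dcf12463dab5fb343f2449563a65432
size 5176
"""
print(parse_lfs_pointer(pointer)["size"])  # → 5176
```

Only the oid changes between the two revisions of training_args.bin above; the size is identical, which is expected since TrainingArguments serializes to a fixed-layout pickle.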