Yassmen committed · Commit 6b297fc · verified · 1 Parent(s): 8da752d

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -17,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
  [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yassmenyoussef55-arete-global/huggingface/runs/z6qr57oj)
  # mixed_model_finetuned_cremad
 
- This model was trained from scratch on [CremaD dataset](https://github.com/CheyneyComputerScience/CREMA-D). dataset, which comprises 7442 recordings of actors expressing six different emotions in English. To create this multimodal model, we employed wav2vec2, pretrained on audio data, and resnet3d, pretrained on video data
+ This model was trained from scratch on [CremaD dataset](https://github.com/CheyneyComputerScience/CREMA-D) , To create this multimodal model, we employed wav2vec2, pretrained on audio data, and resnet3d, pretrained on video data
 
  This dataset provides 7442 samples of recordings from actors performing on 6 different emotions in English, which are: