Commit ad29da2 · Parent: 40c06f7 · Update README.md

---
license: apache-2.0
base_model: hughlan1214/SER_wav2vec2-large-xlsr-53_240304_fine-tuned1.1
tags:
- generated_from_trainer
metrics:

# SER_wav2vec2-large-xlsr-53_240304_fin-tuned_2

This model is a fine-tuned version of [hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fin-tuned2.0](https://huggingface.co/hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fin-tuned2.0) on the [Speech Emotion Recognition (en)](https://www.kaggle.com/datasets/dmitrybabko/speech-emotion-recognition-en) dataset.

This dataset combines the four most popular English emotion datasets: Crema, Ravdess, Savee, and Tess, containing over 12,000 .wav audio files in total. Each of the four datasets includes 6 to 8 emotion labels.

It achieves the following results on the evaluation set:
- Loss: 1.0601
- Accuracy: 0.6731
- Precision: 0.6761
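For reference, a minimal sketch of how accuracy and a macro-averaged precision can be computed from predictions. The gold labels and predictions below are hypothetical examples, not the model's actual outputs, and this sketch averages precision only over classes that were actually predicted:

```python
# Hypothetical gold labels and predictions over the emotion classes.
gold = ['happy', 'sad', 'angry', 'happy', 'neutral', 'fear']
pred = ['happy', 'sad', 'happy', 'happy', 'neutral', 'sad']

# Accuracy: fraction of exact matches.
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Macro precision: average over predicted classes of TP / (TP + FP).
classes = sorted(set(gold) | set(pred))
precisions = []
for c in classes:
    gold_where_pred_c = [g for g, p in zip(gold, pred) if p == c]
    if gold_where_pred_c:
        precisions.append(sum(g == c for g in gold_where_pred_c) / len(gold_where_pred_c))
macro_precision = sum(precisions) / len(precisions)

print(accuracy, macro_precision)
```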

## Model description

The model was obtained through feature extraction using [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) and underwent several rounds of fine-tuning. It predicts the 7 types of emotion contained in speech, aiming to lay the foundation for later combining human micro-expressions on the visual level with context semantics under LLMs to infer user emotions in real time.

Although the model was trained on purely English datasets, post-release testing showed that it also performs well in predicting emotions in Chinese and French, demonstrating the powerful cross-lingual capability of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model.

`emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']`
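As a minimal sketch, the model's class indices can be mapped back to these emotion labels. The index-to-label order is an assumption based on the list above, and the per-class scores are made-up values, not real model output:

```python
# Assumed: class indices follow the order of the emotions list in this card.
emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']
id2label = {i: label for i, label in enumerate(emotions)}

# Hypothetical per-class scores for one audio clip (not real model output).
scores = [0.1, 0.05, 0.2, 1.3, 0.4, 0.15, 0.1]

# Predicted label = label of the highest-scoring class index.
predicted = id2label[max(range(len(scores)), key=scores.__getitem__)]
print(predicted)
```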

## Intended uses & limitations

## Training and evaluation data

A 70/30 train/test split of the entire dataset was used.
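A minimal sketch of such a 70/30 split. The file names and random seed are hypothetical; the card does not state how the split was actually made:

```python
import random

# Hypothetical list of audio files; the combined corpus has 12,000+ clips.
files = [f"clip_{i:05d}.wav" for i in range(12000)]

random.seed(0)  # assumed seed, only for reproducibility of this sketch
random.shuffle(files)

cut = int(0.7 * len(files))  # 70% train, 30% evaluation
train_files, eval_files = files[:cut], files[cut:]
print(len(train_files), len(eval_files))
```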

## Training procedure