DunnBC22 committed on
Commit
5fb8a26
1 Parent(s): ef11a50

Update README.md

Files changed (1)
  1. README.md +12 -7
README.md CHANGED
@@ -6,14 +6,17 @@ datasets:
 - audiofolder
 metrics:
 - accuracy
+- f1
+- recall
+- precision
 model-index:
 - name: wav2vec2-base-Speech_Emotion_Recognition
   results: []
+language:
+- en
+pipeline_tag: audio-classification
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # wav2vec2-base-Speech_Emotion_Recognition

 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
@@ -32,15 +35,17 @@ It achieves the following results on the evaluation set:

 ## Model description

-More information needed
+This model predicts the emotion of the person speaking in the audio sample.
+
+For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/tree/main/Audio-Projects/Emotion%20Detection/Speech%20Emotion%20Detection

 ## Intended uses & limitations

-More information needed
+This model is intended to demonstrate my ability to solve a complex problem using technology.

 ## Training and evaluation data

-More information needed
+Dataset Source: https://www.kaggle.com/datasets/dmitrybabko/speech-emotion-recognition-en

 ## Training procedure

@@ -79,4 +84,4 @@ The following hyperparameters were used during training:
 - Transformers 4.26.1
 - Pytorch 2.0.0+cu118
 - Datasets 2.11.0
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
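
Since the updated card declares `pipeline_tag: audio-classification`, the model can be loaded with the standard `transformers` audio-classification pipeline. The sketch below is a minimal usage example, not part of the commit above: it assumes the model is published under the repo id `DunnBC22/wav2vec2-base-Speech_Emotion_Recognition` and that `sample.wav` is a local speech clip; both are assumptions.

```python
# Minimal usage sketch; repo id and audio file are assumptions, not taken from the diff above.
from transformers import pipeline

# The card's pipeline_tag is audio-classification, so the generic pipeline applies.
classifier = pipeline(
    "audio-classification",
    model="DunnBC22/wav2vec2-base-Speech_Emotion_Recognition",  # assumed repo id
)

# wav2vec2-base expects 16 kHz mono audio; the pipeline decodes and resamples the file.
predictions = classifier("sample.wav")  # hypothetical local file

# Each prediction is a dict with an emotion label and its score.
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```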