DunnBC22 committed
Commit 42bbb1d
Parent: c7a3986

Update README.md

Files changed (1)
  1. README.md +12 -7
README.md CHANGED
@@ -6,14 +6,17 @@ datasets:
  - audiofolder
  metrics:
  - accuracy
+ - f1
+ - recall
+ - precision
  model-index:
  - name: wav2vec2-base-Toronto_emotional_speech_set
    results: []
+ language:
+ - en
+ pipeline_tag: audio-classification
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # wav2vec2-base-Toronto_emotional_speech_set

  This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
@@ -32,15 +35,17 @@ It achieves the following results on the evaluation set:

  ## Model description

- More information needed
+ This model classifies the emotion expressed when someone speaks in an audio sample.
+
+ For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Audio-Projects/Emotion%20Detection/Toronto%20Emotional%20Speech%20Set%20(TESS)/Toronto%20Emotional%20Speech%20Set%20(TESS).ipynb

  ## Intended uses & limitations

- More information needed
+ This model is intended to demonstrate my ability to solve a complex problem using technology.

  ## Training and evaluation data

- More information needed
+ Dataset Source: https://www.kaggle.com/datasets/ejlok1/toronto-emotional-speech-set-tess

  ## Training procedure

@@ -84,4 +89,4 @@ The following hyperparameters were used during training:
  - Transformers 4.27.4
  - Pytorch 2.0.0
  - Datasets 2.11.0
- - Tokenizers 0.13.3
+ - Tokenizers 0.13.3
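
With `pipeline_tag: audio-classification` now in the metadata, the model plugs into the standard Transformers audio-classification pipeline. A minimal usage sketch, assuming the repo id `DunnBC22/wav2vec2-base-Toronto_emotional_speech_set` (inferred from the author and card name on this page) and a placeholder local file `sample.wav`:

```python
from transformers import pipeline

# Repo id inferred from the author and model name on this page; adjust if it differs.
classifier = pipeline(
    "audio-classification",
    model="DunnBC22/wav2vec2-base-Toronto_emotional_speech_set",
)

# "sample.wav" is a placeholder for any local speech recording.
predictions = classifier("sample.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```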
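The `datasets: audiofolder` entry and the Kaggle link suggest the TESS recordings were loaded with the `datasets` AudioFolder builder. A rough sketch under the assumption that the Kaggle archive has been extracted into a local `TESS/` directory with one sub-folder per emotion label (the layout AudioFolder uses to infer labels); the directory name and layout are placeholders, not taken from the card:

```python
from datasets import load_dataset

# Assumed layout after unpacking the Kaggle archive (placeholder paths):
#   TESS/angry/*.wav, TESS/happy/*.wav, ...  one folder per emotion label
dataset = load_dataset("audiofolder", data_dir="TESS")

# AudioFolder infers a class "label" column from the sub-folder names.
print(dataset["train"].features)
print(dataset["train"][0]["label"])
```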