harveysamson committed
Commit 871f4fb
1 Parent(s): 21763f8

update README

Files changed (2)
  1. README.md +30 -1
  2. app.py +1 -1
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-title: Trial Space
+title: wav2vec2-speech-emotion-recognition
 emoji: 🦀
 colorFrom: indigo
 colorTo: green
@@ -9,4 +9,33 @@ app_file: app.py
 pinned: false
 ---
 
+Wav2Vec2 For Speech Emotion Recognition
+
+Emotion is an important aspect of human nature, and understanding it is critical for providing better human-facing services in this era of digital communication, where speech now travels as texts, messages, and calls. Speech Emotion Recognition offers a way to classify the emotions embedded in speech through careful analysis of lexical, visual, and acoustic features.
+
+Link to the main reference: https://github.com/m3hrdadfi/soxan
+
+Evaluation Scores
+
+Emotion      precision  recall  f1-score
+anger          0.82      1.00     0.81
+disgust        0.85      0.96     0.85
+fear           0.78      0.88     0.80
+happiness      0.84      0.71     0.78
+sadness        0.86      1.00     0.79
+Overall accuracy: 0.806 (80.6%)
+
+Wav2Vec2.0 is a model pretrained for Automatic Speech Recognition; the checkpoint used here is fine-tuned with Connectionist Temporal Classification (CTC), a training objective for sequence problems such as ASR.
+
+Google Colab Link: https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb#scrollTo=y0xJwDkA3QQR
+
+Competition board for Common Voice: https://paperswithcode.com/dataset/common-voice
+
+---
+
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
+
+
+
+
+
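The README above describes a Wav2Vec2 checkpoint fine-tuned for speech emotion classification. A minimal inference sketch follows, assuming a generic `transformers` audio-classification head and a placeholder model id (`someuser/wav2vec2-emotion` is not a real checkpoint; the soxan reference repo ships its own `Wav2Vec2ForSpeechClassification` class, so the loading code would need adapting):

```python
# Minimal inference sketch for a Wav2Vec2 emotion classifier.
# Assumptions: "someuser/wav2vec2-emotion" is a placeholder model id, and the
# checkpoint exposes a standard audio-classification head.
import torch
import torchaudio
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "someuser/wav2vec2-emotion"  # placeholder, not a real checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

def predict(path: str) -> dict:
    """Return {emotion: probability} for a single audio file."""
    waveform, sample_rate = torchaudio.load(path)
    # Resample to the rate the feature extractor expects and mix down to mono.
    waveform = torchaudio.functional.resample(
        waveform, sample_rate, feature_extractor.sampling_rate
    ).mean(dim=0)
    inputs = feature_extractor(
        waveform.numpy(),
        sampling_rate=feature_extractor.sampling_rate,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze()
    return {model.config.id2label[i]: float(p) for i, p in enumerate(probs)}
```

A dictionary of label-to-probability scores like this is also the shape of output that the Gradio `Label(type="confidences")` component used in app.py expects.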
app.py CHANGED
@@ -34,7 +34,7 @@ def inference(path):
 inputs = gr.inputs.Audio(label="Input Audio", type="filepath", source="microphone")
 outputs = gr.outputs.Label(type="confidences", label = "Output Scores")
 title = "Wav2Vec2 Speech Emotion Recognition"
-description = "This is a demo of the Wav2Vec2 Speech Emotion Recognition model. Upload an audio file and the top emotions predicted will be displayed."
+description = "This is a demo of the Wav2Vec2 Speech Emotion Recognition model. Record an audio file and the top emotions predicted will be displayed."
 examples = ['data/heart.wav', 'data/happy26.wav', 'data/jm24.wav', 'data/newton.wav', 'data/speeding.wav']
 article = "<a href = 'https://github.com/m3hrdadfi/soxan'> Wav2Vec2 Speech Classification Github Repository"
 
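The app.py hunk only changes the Gradio `description` string; the `gr.Interface` call that wires these variables together sits outside the hunk. The sketch below shows how they are presumably assembled, keeping the legacy `gr.inputs`/`gr.outputs` API the file already uses (the stub `inference` is a placeholder for the real classifier function named in the hunk header):

```python
import gradio as gr

def inference(path):
    # Placeholder for the real inference(path) defined earlier in app.py,
    # which runs the Wav2Vec2 classifier and returns {emotion: confidence}.
    return {"happiness": 0.9, "sadness": 0.1}

# Fragments shown in the diff above (legacy Gradio 2.x/3.x input/output API).
inputs = gr.inputs.Audio(label="Input Audio", type="filepath", source="microphone")
outputs = gr.outputs.Label(type="confidences", label="Output Scores")
title = "Wav2Vec2 Speech Emotion Recognition"
description = ("This is a demo of the Wav2Vec2 Speech Emotion Recognition model. "
               "Record an audio file and the top emotions predicted will be displayed.")
examples = ['data/heart.wav', 'data/happy26.wav', 'data/jm24.wav', 'data/newton.wav', 'data/speeding.wav']
article = "<a href='https://github.com/m3hrdadfi/soxan'>Wav2Vec2 Speech Classification Github Repository</a>"

# Presumed wiring: one Interface built from the pieces above, then launched.
gr.Interface(
    fn=inference,
    inputs=inputs,
    outputs=outputs,
    title=title,
    description=description,
    examples=examples,
    article=article,
).launch()
```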