---
license: cc-by-4.0
task_categories:
- audio-classification
---
# Acted Emotional Speech Dynamic Database v1.0
## ABOUT
AESDD v1.0 was created in October 2017 in the Laboratory of Electronic Media, School of
Journalism and Mass Communications, Aristotle University of Thessaloniki, for
the Speech Emotion Recognition research of the Multidisciplinary Media &
Mediated Communication Research Group (M3C, http://m3c.web.auth.gr/).
It is a collection of utterances of emotional speech acted by professional actors.
This version is the initial state of AESDD. The purpose of this project is the continuous
growth of the database through the collaborative effort of the M3C research group and
theatrical teams.
## CREATION OF THE DATABASE
For the creation of v.1 of the database, 5 professional actors (3 female and 2 male) were
recorded. 19 utterances of ambiguous, out-of-context emotional content were chosen. The
actors performed these 19 utterances in every one of the 5 chosen emotions. One extra improvised
utterance was added for every actor and emotion. The guidance of the actors and the choice
of the final recordings were supervised by a scientific expert in dramatology.
For some of the utterances, more than one take was qualified.
Consequently, around 500 utterances make up the final database.
UPDATE: Since the AESDD is dynamic by definition, more actors have been recorded and added,
following the same naming scheme as described in the section "ORGANISING THE DATABASE".
## CHOSEN EMOTIONS
Five emotions were chosen:
1. a (anger)
2. d (disgust)
3. f (fear)
4. h (happiness)
5. s (sadness)
## ORGANISING THE DATABASE
There are five folders, named after the five emotion classes.
Every file name in the database has the following form: xAA (B)
where
- x is the first letter of the emotion (a --> anger, h --> happiness, etc.)
- AA is the number of the utterance (01, 02, ..., 20)
- B is the number of the speaker (1 --> 1st speaker, 2 --> 2nd speaker, etc.)
e.g. 'a03 (4).wav' is the 3rd utterance spoken by the 4th speaker with anger.
In cases where two takes were qualified for the same utterance, they are distinguished
by a lower-case letter.
e.g. 'f18 (5).wav' and 'f18 (5)b.wav' are two different takes of the 5th actor saying the
18th utterance with fear.
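The naming scheme above can be decoded programmatically. The helper below is a minimal sketch (not part of the dataset itself, and the function name `parse_aesdd_filename` is our own) that splits a file name into its emotion, utterance number, speaker number, and optional take letter:

```python
import re

# Emotion-letter mapping as defined in the "CHOSEN EMOTIONS" section.
EMOTIONS = {"a": "anger", "d": "disgust", "f": "fear", "h": "happiness", "s": "sadness"}

# File names look like 'xAA (B).wav' with an optional take letter, e.g. 'f18 (5)b.wav'.
FILENAME_RE = re.compile(r"^([adfhs])(\d{2}) \((\d+)\)([a-z]?)\.wav$")

def parse_aesdd_filename(name):
    """Return (emotion, utterance_number, speaker_number, take) for an AESDD file name."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"Unrecognised AESDD file name: {name!r}")
    letter, utterance, speaker, take = m.groups()
    return EMOTIONS[letter], int(utterance), int(speaker), take or None

print(parse_aesdd_filename("a03 (4).wav"))   # ('anger', 3, 4, None)
print(parse_aesdd_filename("f18 (5)b.wav"))  # ('fear', 18, 5, 'b')
```

Since the five folders are named after the emotion classes, the folder name can also be cross-checked against the parsed emotion when walking the database.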
## References
1. Vryzas, N., Kotsakis, R., Liatsou, A., Dimoulas, C. A., & Kalliris, G. (2018). Speech emotion recognition for performance interaction. Journal of the Audio Engineering Society, 66(6), 457-467.
2. Vryzas, N., Matsiola, M., Kotsakis, R., Dimoulas, C., & Kalliris, G. (2018, September). Subjective Evaluation of a Speech Emotion Recognition Interaction Framework. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion (p. 34). ACM.