---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: amusing
      dtype: float64
    - name: angry
      dtype: float64
    - name: annoying
      dtype: float64
    - name: anxious/tense
      dtype: float64
    - name: awe-inspiring/amazing
      dtype: float64
    - name: beautiful
      dtype: float64
    - name: bittersweet
      dtype: float64
    - name: calm/relaxing/serene
      dtype: float64
    - name: compassionate/sympathetic
      dtype: float64
    - name: dreamy
      dtype: float64
    - name: eerie/mysterious
      dtype: float64
    - name: energizing/pump-up
      dtype: float64
    - name: entrancing
      dtype: float64
    - name: erotic/desirous
      dtype: float64
    - name: euphoric/ecstatic
      dtype: float64
    - name: exciting
      dtype: float64
    - name: goose bumps
      dtype: float64
    - name: indignant/defiant
      dtype: float64
    - name: joyful/cheerful
      dtype: float64
    - name: nauseating/revolting
      dtype: float64
    - name: painful
      dtype: float64
    - name: proud/strong
      dtype: float64
    - name: romantic/loving
      dtype: float64
    - name: sad/depressing
      dtype: float64
    - name: scary/fearful
      dtype: float64
    - name: tender/longing
      dtype: float64
    - name: transcendent/mystical
      dtype: float64
    - name: triumphant/heroic
      dtype: float64
  splits:
    - name: train
      num_bytes: 166110026.787
      num_examples: 1841
  download_size: 159674012
  dataset_size: 166110026.787
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - feature-extraction
  - audio-classification
tags:
  - Music
  - Emotion
  - Recognition
  - MERT
  - Dataset
  - Audio
pretty_name: 13 Dimension Emotions Dataset
size_categories:
  - 1K<n<10K
---

# Dataset Card for Music Emotion Ratings Across Cultures

This dataset contains mean emotion-category ratings for 1,841 instrumental music samples, based on the subjective experiences reported by participants from the United States and China. The ratings were collected as part of a study investigating the universal and nuanced emotions evoked by instrumental music (Cowen et al., 2020).
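
A minimal sketch of loading the data with the 🤗 `datasets` library. The repository id below is a placeholder, not the actual path; substitute this dataset's id on the Hub:

```python
from datasets import load_dataset

# Placeholder repository id; substitute this dataset's actual path on the Hub.
ds = load_dataset("<user>/music-emotion-ratings", split="train")

print(ds)                        # audio column plus 28 float64 emotion-rating columns
print(ds[0]["joyful/cheerful"])  # mean rating of one emotion category for the first clip
```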


## Dataset Details

### Dataset Sources

- Paper: Cowen, A. S., Fang, X., Sauter, D., & Keltner, D. (2020). What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures. PNAS, 117(4), 1924-1934. https://doi.org/10.1073/pnas.1910704117

## Uses

### Direct Use

This dataset is designed for:

- Music emotion classification: training multi-label classifiers that identify emotions in music from the rated emotion categories or the study's 13 dimensions (see the sketch after this list).
- Cross-cultural emotion analysis: analyzing similarities and differences in emotional responses to music across cultures.
- Emotion visualization: creating high-dimensional visualizations of the emotional distributions in music.
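
A minimal sketch of assembling multi-label targets from the rating columns, assuming `ds` was loaded as in the snippet above. The 0.5 binarization threshold is an illustrative assumption, not part of the dataset:

```python
import numpy as np

# Every non-audio column holds a mean emotion rating (see the metadata above).
emotion_cols = [c for c in ds.column_names if c != "audio"]

# Stack the rating columns into a (num_clips, num_emotions) matrix.
ratings = np.stack([np.asarray(ds[c], dtype=np.float64) for c in emotion_cols], axis=1)

# Illustrative binarization for multi-label classification; tune the
# threshold to the rating scale actually used.
targets = (ratings > 0.5).astype(np.int64)
print(ratings.shape, targets.sum(axis=0))  # (1841, 28) and per-emotion label counts
```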

### Out-of-Scope Use

The dataset is not suitable for:

- Identifying lyrics-related emotions (the music is instrumental).
- Cultural or genre-specific emotional predictions beyond the U.S. and Chinese populations sampled.
- Building biased systems that assume emotional responses are fixed across all populations.

## Dataset Structure

### Data Fields

Each example contains:

- `audio`: a 5-second instrumental music clip, decoded to a waveform and sampling rate on access (see the sketch below). The source study collected 2,168 clips; 1,841 are included in this release.
- 28 `float64` columns: the mean rating for each rated emotion category, e.g. `amusing`, `angry`, `joyful/cheerful`, `sad/depressing` (the full list appears in the metadata above).

From these rated categories, the source study identified 13 dimensions that organize subjective experiences of music:

- Joyful/Cheerful
- Calm/Relaxing/Serene
- Sad/Depressing
- Scary/Fearful
- Triumphant/Heroic
- Energizing/Pump-up
- Dreamy
- Romantic/Loving
- Amusing
- Exciting
- Compassionate/Sympathetic
- Awe-Inspiring/Amazing
- Eerie/Mysterious

The study also reports valence (pleasantness) and arousal (energy) ratings; these broad affective features are not columns in this release.
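
A short sketch of inspecting one example, relying on the standard decoding behavior of the `datasets` `Audio` feature and again assuming `ds` from the loading snippet above:

```python
# Columns other than `audio` are the emotion ratings.
emotion_cols = [c for c in ds.column_names if c != "audio"]

example = ds[0]
waveform = example["audio"]["array"]    # 1-D array of audio samples
sr = example["audio"]["sampling_rate"]  # sampling rate in Hz
print(waveform.shape, sr)

# Mean ratings for this clip, one per emotion column.
for col in emotion_cols[:5]:
    print(f"{col}: {example[col]:.3f}")
```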

### Splits

The dataset is published as a single `train` split of 1,841 examples; no validation or test split is predefined (a sketch for creating one follows). For analysis it can be segmented by:

- Cultural group: U.S. vs. China (as reported in the source study).
- Emotional dimension: individual emotion categories or broad features such as valence/arousal.
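
Since no held-out split ships with the data, a minimal sketch of creating one with the standard `datasets` API:

```python
# Deterministic 80/20 split; the seed and test fraction are arbitrary choices.
splits = ds.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
print(len(train_ds), len(test_ds))
```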

## Dataset Creation

### Curation Rationale

The dataset was created to:

- Map universal emotions in music: investigate whether the emotional experiences evoked by music are universal across cultures.
- Broaden emotional taxonomies: move beyond traditional models that use only 6 emotions or simple valence/arousal dimensions.
- Enable nuanced emotional understanding: provide a high-dimensional framework for understanding and classifying emotional responses to music.

### Source Data

- Original sources: instrumental music samples (5 seconds each) contributed by participants to represent specific emotional categories.
- Annotations: ratings collected through large-scale crowdsourcing from 1,591 U.S. and 1,258 Chinese participants.

## License

[More Information Needed]


## Citation

If you use this dataset, please cite the source study:

APA:

Cowen, A. S., Fang, X., Sauter, D., & Keltner, D. (2020). What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures. PNAS, 117(4), 1924-1934. https://doi.org/10.1073/pnas.1910704117

BibTeX:

    @article{cowen2020music,
      author  = {Cowen, Alan S. and Fang, Xia and Sauter, Disa and Keltner, Dacher},
      title   = {What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures},
      journal = {Proceedings of the National Academy of Sciences},
      volume  = {117},
      number  = {4},
      pages   = {1924--1934},
      year    = {2020},
      doi     = {10.1073/pnas.1910704117}
    }


## Personal and Sensitive Information

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.



## Dataset Card Contact

[More Information Needed]