Update README.md

added info about dataset
pretty_name: 13 Dimension Emotions Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for Music Emotion Ratings Across Cultures

This dataset contains the mean emotional category ratings for 1,841 music samples, based on subjective experiences reported by participants from the United States and China. The ratings were collected as part of a study investigating the **universal and nuanced emotions** evoked by instrumental music.

---

## Dataset Details

### Dataset Sources

- **Paper**: [What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures](https://www.pnas.org/cgi/doi/10.1073/pnas.1910704117)
- **Demo (Interactive Map)**: [Music Emotion Map](https://www.ocf.berkeley.edu/~acowen/music.html)

---
## Uses

### Direct Use

This dataset is designed for:

- **Music Emotion Classification**: Training multi-label classifiers to identify emotions in music based on the 13 universal categories.
- **Cross-Cultural Emotion Analysis**: Analyzing similarities and differences in emotional responses to music across cultures.
- **Emotion Visualization**: Creating high-dimensional visualizations of emotional distributions in music.
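The multi-label classification use can be sketched in plain Python: since each clip carries a mean rating per category, binary labels can be derived by thresholding. The category names and the 0–1 rating scale below are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch: derive multi-label targets from mean category ratings
# by thresholding. Category names and scale are assumptions, not the real schema.

CATEGORIES = ["joyful", "calm", "sad", "scary", "triumphant"]  # subset of the 13

def to_multilabel(ratings: dict, threshold: float = 0.5) -> list:
    """Return the emotion categories whose mean rating exceeds the threshold."""
    return [c for c in CATEGORIES if ratings.get(c, 0.0) > threshold]

sample = {"joyful": 0.72, "calm": 0.10, "sad": 0.05, "scary": 0.02, "triumphant": 0.61}
print(to_multilabel(sample))  # → ['joyful', 'triumphant']
```

A real pipeline would read the thresholded labels per clip into any standard multi-label classifier; the threshold itself is a modeling choice, not part of the dataset.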
### Out-of-Scope Use

The dataset is **not suitable** for:

- Identifying lyric-related emotions (the music is instrumental).
- Making cultural or genre-specific emotional predictions for populations outside the U.S. and China.
- Building biased systems that treat emotional responses as fixed across all populations.

---
## Dataset Structure

### Data Fields

- **Sample ID**: Unique identifier for each of the 2,168 music clips.
- **Category Ratings**: Mean ratings for each of the 13 universal emotional categories:
  - Joyful/Cheerful
  - Calm/Relaxing
  - Sad/Depressing
  - Scary/Fearful
  - Triumphant/Heroic
  - Energizing/Pump-up
  - Dreamy
  - Romantic/Loving
  - Amusing
  - Exciting
  - Compassionate/Sympathetic
  - Awe-Inspiring
  - Eerie/Mysterious
- **Valence**: Mean ratings for pleasantness (positive or negative feelings).
- **Arousal**: Mean ratings for energy levels (calm or excited feelings).
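A common first step with these fields is reducing each clip's category ratings to a single dominant emotion. The field names and values below are hypothetical, a minimal sketch of the idea rather than the dataset's real column layout.

```python
# Hypothetical sketch: find each clip's dominant emotion from its mean
# category ratings. Field names and values are illustrative only.

rows = [
    {"sample_id": "clip_0001", "joyful": 0.81, "sad": 0.03, "eerie": 0.12},
    {"sample_id": "clip_0002", "joyful": 0.05, "sad": 0.66, "eerie": 0.21},
]

def dominant_emotion(row: dict) -> str:
    """Return the category with the highest mean rating for one clip."""
    ratings = {k: v for k, v in row.items() if k != "sample_id"}
    return max(ratings, key=ratings.get)

print([dominant_emotion(r) for r in rows])  # → ['joyful', 'sad']
```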
### Splits

The dataset does not use predefined splits but can be segmented by:

- **Cultural group**: U.S. vs. China.
- **Emotional dimension**: Individual emotional categories or broad features such as valence/arousal.

---
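Segmenting by cultural group amounts to partitioning rows on a culture identifier. The `culture` field name and the rows below are assumptions for illustration; the released files may encode the U.S./China split differently.

```python
# Hypothetical sketch: partition rating rows by cultural group, since the
# dataset ships without predefined splits. The "culture" field is assumed.
from collections import defaultdict

rows = [
    {"sample_id": "clip_0001", "culture": "US", "valence": 0.74},
    {"sample_id": "clip_0001", "culture": "CN", "valence": 0.69},
    {"sample_id": "clip_0002", "culture": "US", "valence": 0.31},
]

def split_by_culture(rows: list) -> dict:
    """Group rating rows into one list per cultural group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["culture"]].append(row)
    return dict(groups)

splits = split_by_culture(rows)
print({k: len(v) for k, v in sorted(splits.items())})  # → {'CN': 1, 'US': 2}
```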
## Dataset Creation

### Curation Rationale

The dataset was created to:

- **Map Universal Emotions in Music**: Investigate whether emotional experiences evoked by music are universal across cultures.
- **Broaden Emotional Taxonomies**: Move beyond traditional models that use only six emotions or simple valence/arousal dimensions.
- **Enable Nuanced Emotional Understanding**: Provide a high-dimensional framework for understanding and classifying emotional responses to music.

---
### Source Data

- **Original Sources**: Instrumental music samples (5 seconds each) were contributed by participants to represent specific emotional categories.
- **Annotations**: Ratings collected through large-scale crowdsourcing from 1,591 U.S. and 1,258 Chinese participants.

---

## License

[More Information Needed]

---

## Citation

If you use this dataset, please cite the following paper:

Cowen, A. S., Fang, X., Sauter, D., & Keltner, D. (2020). What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures. *PNAS*, 117(4), 1924–1934. https://doi.org/10.1073/pnas.1910704117