admin committed on
Commit a78bf79
1 Parent(s): e988066
Files changed (3)
  1. .gitignore +1 -0
  2. README.md +235 -1
  3. music_genre.py +222 -0
.gitignore ADDED
@@ -0,0 +1 @@
+ rename.sh
README.md CHANGED
@@ -1,3 +1,237 @@
  ---
- license: mit
+ license: cc-by-nc-nd-4.0
+ task_categories:
+ - audio-classification
+ - image-classification
+ language:
+ - zh
+ - en
+ tags:
+ - music
+ - art
+ pretty_name: Music Genre Dataset
+ size_categories:
+ - 10K<n<100K
+ viewer: false
  ---
+
+ # Dataset Card for Music Genre
+ The Default subset comprises approximately 1,700 musical pieces in .mp3 format, sourced from NetEase Music. The lengths of these pieces range from 270 to 300 seconds, and all are sampled at 22,050 Hz. As the website providing the audio includes style labels for the downloaded music, no separate annotators were involved; validation was performed concurrently with the downloading process. The pieces are categorized into a total of 16 genres.
+
+ ## Viewer
+ <https://www.modelscope.cn/datasets/ccmusic-database/music_genre/dataPeview>
+
+ ## Dataset Structure
+ <style>
+ .genres td {
+ vertical-align: middle !important;
+ text-align: center;
+ }
+ .genres th {
+ text-align: center;
+ }
+ </style>
+
+ ### Default Subset
+ <table class="genres">
+ <tr>
+ <th>audio</th>
+ <th>mel (spectrogram)</th>
+ <th>fst_level_label (2-class)</th>
+ <th>sec_level_label (9-class)</th>
+ <th>thr_level_label (16-class)</th>
+ </tr>
+ <tr>
+ <td>.wav, 22050Hz</td>
+ <td>.jpg, 22050Hz</td>
+ <td>1_Classic / 2_Non_classic</td>
+ <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 7_Pop / 8_Dance_and_house / 9_Indie / 10_Soul_or_r_and_b / 11_Rock</td>
+ <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 12_Pop_vocal_ballad / 13_Adult_contemporary / 14_Teen_pop / 15_Contemporary_dance_pop / 16_Dance_pop / 17_Classic_indie_pop / 18_Chamber_cabaret_and_art_pop / 10_Soul_or_r_and_b / 19_Adult_alternative_rock / 20_Uplifting_anthemic_rock / 21_Soft_rock / 22_Acoustic_pop</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ </table>
+
+ ### Eval Subset
+ <table class="genres">
+ <tr>
+ <th>mel</th>
+ <th>cqt</th>
+ <th>chroma</th>
+ <th>fst_level_label (2-class)</th>
+ <th>sec_level_label (9-class)</th>
+ <th>thr_level_label (16-class)</th>
+ </tr>
+ <tr>
+ <td>.jpg, 11.4s, 48000Hz</td>
+ <td>.jpg, 11.4s, 48000Hz</td>
+ <td>.jpg, 11.4s, 48000Hz</td>
+ <td>1_Classic / 2_Non_classic</td>
+ <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 7_Pop / 8_Dance_and_house / 9_Indie / 10_Soul_or_r_and_b / 11_Rock</td>
+ <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 12_Pop_vocal_ballad / 13_Adult_contemporary / 14_Teen_pop / 15_Contemporary_dance_pop / 16_Dance_pop / 17_Classic_indie_pop / 18_Chamber_cabaret_and_art_pop / 10_Soul_or_r_and_b / 19_Adult_alternative_rock / 20_Uplifting_anthemic_rock / 21_Soft_rock / 22_Acoustic_pop</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ </table>
+
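The Eval subset stores parallel mel/cqt/chroma folder trees, and the loader script in this commit derives the CQT and chroma paths from each mel path by string replacement. A minimal stdlib-only sketch of that convention (the example path is hypothetical):

```python
def sibling_spectrograms(mel_path: str) -> dict:
    """Derive the cqt/chroma counterparts of a mel spectrogram path
    by swapping the spectrogram directory name, as the loader does."""
    return {
        "mel": mel_path,
        "cqt": mel_path.replace("/mel/", "/cqt/").replace("\\mel\\", "\\cqt\\"),
        "chroma": mel_path.replace("/mel/", "/chroma/").replace("\\mel\\", "\\chroma\\"),
    }

# Hypothetical example path following the dataset's folder layout
paths = sibling_spectrograms("eval/mel/1_Classic/3_Symphony/0001.jpg")
print(paths["cqt"])     # eval/cqt/1_Classic/3_Symphony/0001.jpg
print(paths["chroma"])  # eval/chroma/1_Classic/3_Symphony/0001.jpg
```

The double replacement covers both POSIX and Windows separators, matching the loader's behavior.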
+ ### Data Instances
+ .zip archives of .jpg spectrogram images
+ <img src="./data/labelv.png">
+
+ ### Data Fields
+ ```
+ 1_Classic
+ 3_Symphony
+ 4_Opera
+ 5_Solo
+ 6_Chamber
+
+ 2_Non_classic
+ 7_Pop
+ 12_Pop_vocal_ballad
+ 13_Adult_contemporary
+ 14_Teen_pop
+
+ 8_Dance_and_house
+ 15_Contemporary_dance_pop
+ 16_Dance_pop
+
+ 9_Indie
+ 17_Classic_indie_pop
+ 18_Chamber_cabaret_and_art_pop
+
+ 10_Soul_or_RnB
+
+ 11_Rock
+ 19_Adult_alternative_rock
+ 20_Uplifting_anthemic_rock
+ 21_Soft_rock
+ 22_Acoustic_pop
+ ```
+ <img src="https://www.modelscope.cn/api/v1/datasets/ccmusic-database/music_genre/repo?Revision=master&FilePath=.%2Fdata%2Fgenre.png&View=true">
+
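The three label levels above form a tree. A small dict sketch (ids and names copied from the list above) shows how a third-level class maps back to its second- and first-level parents; the helper name is ours, not part of the dataset:

```python
# Parent of each second-level class (first level: 1_Classic / 2_Non_classic)
FIRST_OF = {3: 1, 4: 1, 5: 1, 6: 1, 7: 2, 8: 2, 9: 2, 10: 2, 11: 2}

# Second-level parent of each third-level class; classes 3-6 and 10
# appear at both levels, so they map to themselves here.
SECOND_OF = {
    3: 3, 4: 4, 5: 5, 6: 6,          # classical branches
    12: 7, 13: 7, 14: 7,             # Pop
    15: 8, 16: 8,                    # Dance_and_house
    17: 9, 18: 9,                    # Indie
    10: 10,                          # Soul_or_RnB
    19: 11, 20: 11, 21: 11, 22: 11,  # Rock
}

def ancestors(third_level: int) -> tuple:
    """Return (first_level, second_level) ids for a third-level class id."""
    second = SECOND_OF[third_level]
    return FIRST_OF[second], second

print(ancestors(14))  # (2, 7): Teen_pop -> Pop -> Non_classic
```

Counting the keys recovers the 2/9/16 class counts quoted in the tables above.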
+ ### Data Splits
+ | Split | Default | Eval |
+ | :-------------: | :-----: | :---: |
+ | total | 1713 | 36375 |
+ | train (80%) | 1370 | 29100 |
+ | validation (10%) | 171 | 3637 |
+ | test (10%) | 172 | 3638 |
+
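The split sizes in the table follow from truncating the 80% and 90% boundaries of the shuffled total, exactly as the loader script below does with `int()`. A quick check:

```python
def split_sizes(total: int) -> tuple:
    """Reproduce the 8:1:1 split used by the loader (int() truncates)."""
    p80 = int(total * 0.8)
    p90 = int(total * 0.9)
    return p80, p90 - p80, total - p90

print(split_sizes(1713))   # (1370, 171, 172)
print(split_sizes(36375))  # (29100, 3637, 3638)
```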
+ ## Dataset Description
+ - **Homepage:** <https://ccmusic-database.github.io>
+ - **Repository:** <https://huggingface.co/datasets/ccmusic-database/music_genre>
+ - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
+ - **Leaderboard:** <https://www.modelscope.cn/datasets/ccmusic-database/music_genre>
+ - **Point of Contact:** <https://huggingface.co/ccmusic-database/music_genre>
+
+ ### Dataset Summary
+ This database contains about 1,700 musical pieces (.mp3 format) with lengths of 270-300 s, divided into 16 genres in total.
+
+ ### Supported Tasks and Leaderboards
+ Audio classification
+
+ ### Languages
+ Multilingual
+
+ ## Maintenance
+ ```bash
+ GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/music_genre
+ cd music_genre
+ ```
+
+ ## Usage
+ ### Default Subset
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/music_genre", name="default")
+ for item in ds["train"]:
+     print(item)
+
+ for item in ds["validation"]:
+     print(item)
+
+ for item in ds["test"]:
+     print(item)
+ ```
+
+ ### Eval Subset
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/music_genre", name="eval")
+ for item in ds["train"]:
+     print(item)
+
+ for item in ds["validation"]:
+     print(item)
+
+ for item in ds["test"]:
+     print(item)
+ ```
+
+ ## Dataset Creation
+ ### Curation Rationale
+ Promoting the development of AI in the music industry
+
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ Zhaorui Liu, Monan Zhou
+
+ #### Who are the source language producers?
+ Composers of the songs in the dataset
+
+ ### Annotations
+ #### Annotation process
+ Students collected about 1,700 musical pieces (.mp3 format) with lengths of 270-300 s, divided into 16 genres in total.
+
+ #### Who are the annotators?
+ Students from CCMUSIC
+
+ ### Personal and Sensitive Information
+ Due to copyright issues with the original music, only spectrograms are provided in the dataset.
+
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ Promoting the development of AI in the music industry
+
+ ### Discussion of Biases
+ Most of the pieces are English songs
+
+ ### Other Known Limitations
+ The samples are not fully balanced across genres
+
+ ## Additional Information
+ ### Dataset Curators
+ Zijin Li
+
+ ### Evaluation
+ <https://huggingface.co/ccmusic-database/music_genre>
+
+ ### Citation Information
+ ```bibtex
+ @dataset{zhaorui_liu_2021_5676893,
+     author = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
+     title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+     month = {mar},
+     year = {2024},
+     publisher = {HuggingFace},
+     version = {1.2},
+     url = {https://huggingface.co/ccmusic-database}
+ }
+ ```
+
+ ### Contributions
+ Provides a dataset for music genre classification
music_genre.py ADDED
@@ -0,0 +1,222 @@
+ import os
+ import random
+ import datasets
+ from datasets.tasks import ImageClassification
+
+ _NAMES_1 = {
+     1: "Classic",
+     2: "Non_classic",
+ }
+
+ _NAMES_2 = {
+     3: "Symphony",
+     4: "Opera",
+     5: "Solo",
+     6: "Chamber",
+     7: "Pop",
+     8: "Dance_and_house",
+     9: "Indie",
+     10: "Soul_or_RnB",
+     11: "Rock",
+ }
+
+ _NAMES_3 = {
+     3: "Symphony",
+     4: "Opera",
+     5: "Solo",
+     6: "Chamber",
+     12: "Pop_vocal_ballad",
+     13: "Adult_contemporary",
+     14: "Teen_pop",
+     15: "Contemporary_dance_pop",
+     16: "Dance_pop",
+     17: "Classic_indie_pop",
+     18: "Chamber_cabaret_and_art_pop",
+     10: "Soul_or_RnB",
+     19: "Adult_alternative_rock",
+     20: "Uplifting_anthemic_rock",
+     21: "Soft_rock",
+     22: "Acoustic_pop",
+ }
+
+ _DBNAME = os.path.basename(__file__).split(".")[0]
+
+ _HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic-database/{_DBNAME}"
+
+ _DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic-database/{_DBNAME}/repo?Revision=master&FilePath=data"
+
+ _CITATION = """\
+ @dataset{zhaorui_liu_2021_5676893,
+     author = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
+     title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+     month = {mar},
+     year = {2024},
+     publisher = {HuggingFace},
+     version = {1.2},
+     url = {https://huggingface.co/ccmusic-database}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The raw dataset comprises approximately 1,700 musical pieces in .mp3 format, sourced from NetEase Music. The lengths of these pieces range from 270 to 300 seconds. All are sampled at the rate of 48,000 Hz. As the website providing the audio music includes style labels for the downloaded music, there are no specific annotators involved. Validation is achieved concurrently with the downloading process. They are categorized into a total of 16 genres.
+
+ For the pre-processed version, the audio is cut into 11.4-second segments, resulting in 36,375 files, which are then transformed into Mel, CQT and Chroma spectrograms. Each data entry has six columns: the first three columns are the Mel, CQT, and Chroma spectrogram slices in .jpg format, respectively, while the last three columns contain the labels for the three levels. The first level comprises two categories, the second level nine categories, and the third level 16 categories. The entire dataset is shuffled and split into training, validation, and test sets in a ratio of 8:1:1. This dataset can be used for genre classification.
+ """
+
+ _URLS = {
+     "audio": f"{_DOMAIN}/audio.zip",
+     "mel": f"{_DOMAIN}/mel.zip",
+     "eval": f"{_DOMAIN}/eval.zip",
+ }
+
+
+ class music_genre(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="default"),
+         datasets.BuilderConfig(name="eval"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             features=(
+                 datasets.Features(
+                     {
+                         "audio": datasets.Audio(sampling_rate=22050),
+                         "mel": datasets.Image(),
+                         "fst_level_label": datasets.features.ClassLabel(
+                             names=list(_NAMES_1.values())
+                         ),
+                         "sec_level_label": datasets.features.ClassLabel(
+                             names=list(_NAMES_2.values())
+                         ),
+                         "thr_level_label": datasets.features.ClassLabel(
+                             names=list(_NAMES_3.values())
+                         ),
+                     }
+                 )
+                 if self.config.name == "default"
+                 else datasets.Features(
+                     {
+                         "mel": datasets.Image(),
+                         "cqt": datasets.Image(),
+                         "chroma": datasets.Image(),
+                         "fst_level_label": datasets.features.ClassLabel(
+                             names=list(_NAMES_1.values())
+                         ),
+                         "sec_level_label": datasets.features.ClassLabel(
+                             names=list(_NAMES_2.values())
+                         ),
+                         "thr_level_label": datasets.features.ClassLabel(
+                             names=list(_NAMES_3.values())
+                         ),
+                     }
+                 )
+             ),
+             supervised_keys=("mel", "sec_level_label"),
+             homepage=_HOMEPAGE,
+             license="CC-BY-NC-ND",
+             version="1.2.0",
+             citation=_CITATION,
+             description=_DESCRIPTION,
+             task_templates=[
+                 ImageClassification(
+                     task="image-classification",
+                     image_column="mel",
+                     label_column="sec_level_label",
+                 )
+             ],
+         )
+
+     def _split_generators(self, dl_manager):
+         dataset = []
+         if self.config.name == "default":
+             files = {}
+             audio_files = dl_manager.download_and_extract(_URLS["audio"])
+             mel_files = dl_manager.download_and_extract(_URLS["mel"])
+             for path in dl_manager.iter_files([audio_files]):
+                 fname: str = os.path.basename(path)
+                 if fname.endswith(".mp3"):
+                     files[fname.split(".mp")[0]] = {"audio": path}
+
+             for path in dl_manager.iter_files([mel_files]):
+                 fname = os.path.basename(path)
+                 if fname.endswith(".jpg"):
+                     files[fname.split(".jp")[0]]["mel"] = path
+
+             dataset = list(files.values())
+
+         else:
+             data_files = dl_manager.download_and_extract(_URLS["eval"])
+             for path in dl_manager.iter_files([data_files]):
+                 if os.path.basename(path).endswith(".jpg") and "mel" in path:
+                     dataset.append(
+                         {
+                             "mel": path,
+                             "cqt": path.replace("\\mel\\", "\\cqt\\").replace(
+                                 "/mel/", "/cqt/"
+                             ),
+                             "chroma": path.replace("\\mel\\", "\\chroma\\").replace(
+                                 "/mel/", "/chroma/"
+                             ),
+                         }
+                     )
+
+         random.shuffle(dataset)
+         data_count = len(dataset)
+         p80 = int(data_count * 0.8)
+         p90 = int(data_count * 0.9)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"files": dataset[:p80]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"files": dataset[p80:p90]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"files": dataset[p90:]},
+             ),
+         ]
+
+     def _calc_label(self, path, depth, substr="/mel/"):
+         # Locate the spectrogram folder in the path, handling both
+         # POSIX and Windows separators
+         spect = substr
+         dirpath: str = os.path.dirname(path)
+         substr_index = dirpath.find(spect)
+         if substr_index < 0:
+             spect = spect.replace("/", "\\")
+             substr_index = dirpath.find(spect)
+
+         # The nested folder names below it carry the numeric class prefixes
+         labstr = dirpath[substr_index + len(spect) :]
+         labs = labstr.split("/")
+         if len(labs) < 2:
+             labs = labstr.split("\\")
+
+         if depth <= len(labs):
+             return int(labs[depth - 1].split("_")[0])
+         else:
+             return int(labs[-1].split("_")[0])
+
+     def _generate_examples(self, files):
+         if self.config.name == "default":
+             for i, path in enumerate(files):
+                 yield i, {
+                     "audio": path["audio"],
+                     "mel": path["mel"],
+                     "fst_level_label": _NAMES_1[self._calc_label(path["mel"], 1)],
+                     "sec_level_label": _NAMES_2[self._calc_label(path["mel"], 2)],
+                     "thr_level_label": _NAMES_3[self._calc_label(path["mel"], 3)],
+                 }
+
+         else:
+             for i, path in enumerate(files):
+                 yield i, {
+                     "mel": path["mel"],
+                     "cqt": path["cqt"],
+                     "chroma": path["chroma"],
+                     "fst_level_label": _NAMES_1[self._calc_label(path["mel"], 1)],
+                     "sec_level_label": _NAMES_2[self._calc_label(path["mel"], 2)],
+                     "thr_level_label": _NAMES_3[self._calc_label(path["mel"], 3)],
+                 }
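As a sanity check of the directory-to-label convention that `_calc_label` relies on (numeric prefixes in nested folder names under the spectrogram directory), here is a standalone sketch; the example path is hypothetical but follows the layout shown in the dataset card:

```python
import os

def calc_label(path: str, depth: int, substr: str = "/mel/") -> int:
    """Extract the numeric class prefix at the given depth below the
    spectrogram folder, mirroring music_genre._calc_label (POSIX paths)."""
    dirpath = os.path.dirname(path)
    idx = dirpath.find(substr)
    labs = dirpath[idx + len(substr):].split("/")
    if depth <= len(labs):
        return int(labs[depth - 1].split("_")[0])
    return int(labs[-1].split("_")[0])

# Hypothetical path: first level 2_Non_classic, second 7_Pop, third 14_Teen_pop
p = "eval/mel/2_Non_classic/7_Pop/14_Teen_pop/0001.jpg"
print(calc_label(p, 1), calc_label(p, 2), calc_label(p, 3))  # 2 7 14
```

The returned integers are then mapped to names via `_NAMES_1`/`_NAMES_2`/`_NAMES_3` in `_generate_examples`.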