MuGeminorum committed on
Commit
7ea4927
1 Parent(s): 7911586
README.md CHANGED
@@ -14,12 +14,15 @@ size_categories:
  - 10K<n<100K
  viewer: false
  ---

  # Dataset Card for Music Genre

  ## Dataset Description
  - **Homepage:** <https://ccmusic-database.github.io>
  - **Repository:** <https://huggingface.co/datasets/ccmusic-database/music_genre>
  - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- - **Leaderboard:** <https://ccmusic-database.github.io/team.html>
  - **Point of Contact:** <https://huggingface.co/ccmusic-database/music_genre>

  ### Dataset Summary
@@ -34,23 +37,42 @@ Multilingual
  ## Maintenance
  ```bash
  GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/music_genre
  ```

  ## Usage
- When doing a classification task, only one column of fst_level_label, sec_level_label and thr_level_label can be used, not for mixing.
  ```python
  from datasets import load_dataset

- dataset = load_dataset("ccmusic-database/music_genre")

- for item in dataset["train"]:
      print(item)

- for item in dataset["test"]:
      print(item)
  ```

  ## Dataset Structure
  <style>
  #genres td {
      vertical-align: middle !important;
@@ -62,17 +84,17 @@ for item in dataset["test"]:
  </style>
  <table id="genres">
  <tr>
- <td>mel(.jpg, 11.4s)</td>
- <td>cqt(.jpg, 11.4s)</td>
- <td>chroma(.jpg, 11.4s)</td>
  <td>fst_level_label(2-class)</td>
  <td>sec_level_label(9-class)</td>
  <td>thr_level_label(16-class)</td>
  </tr>
  <tr>
- <td><img src="https://cdn-uploads.huggingface.co/production/uploads/655e0a5b8c2d4379a71882a9/PqdpQP__ls-xo6lz93Q4y.jpeg"></td>
- <td><img src="https://cdn-uploads.huggingface.co/production/uploads/655e0a5b8c2d4379a71882a9/EZfYLng40hh_FUudB9vvx.jpeg"></td>
- <td><img src="https://cdn-uploads.huggingface.co/production/uploads/655e0a5b8c2d4379a71882a9/zviZ-rEKAvBCVFvKFml4R.jpeg"></td>
  <td>1_Classic / 2_Non_classic</td>
  <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 7_Pop / 8_Dance_and_house / 9_Indie / 10_Soul_or_r_and_b / 11_Rock</td>
  <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 12_Pop_vocal_ballad / 13_Adult_contemporary / 14_Teen_pop / 15_Contemporary_dance_pop / 16_Dance_pop / 17_Classic_indie_pop / 18_Chamber_cabaret_and_art_pop / 10_Soul_or_r_and_b / 19_Adult_alternative_rock / 20_Uplifting_anthemic_rock / 21_Soft_rock / 22_Acoustic_pop</td>
@@ -87,6 +109,31 @@ for item in dataset["test"]:
  </tr>
  </table>

  ### Data Instances
  .zip(.jpg)

@@ -122,11 +169,12 @@ for item in dataset["test"]:
  ```

  ### Data Splits
- | total | 36375 |
- | :-------------: | :---: |
- | train(80%) | 29100 |
- | validation(10%) | 3637 |
- | test(10%) | 3638 |

  ## Dataset Creation
  ### Curation Rationale
@@ -137,7 +185,7 @@ Promoting the development of AI in the music industry
  Zhaorui Liu, Monan Zhou

  #### Who are the source language producers?
- Composers of the songs in dataset

  ### Annotations
  #### Annotation process
@@ -192,16 +240,15 @@ SOFTWARE.
  ```

  ### Citation Information
- ```
  @dataset{zhaorui_liu_2021_5676893,
- author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Yuan Wang, Zhaowen Wang, Wei Li and Zijin Li},
- title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
- month = {nov},
- year = {2021},
- publisher = {Zenodo},
- version = {1.1},
- doi = {10.5281/zenodo.5676893},
- url = {https://doi.org/10.5281/zenodo.5676893}
  }
  ```
 
  - 10K<n<100K
  viewer: false
  ---
+
  # Dataset Card for Music Genre
+ The raw dataset comprises approximately 1,700 musical pieces in .mp3 format, sourced from NetEase Music. The pieces range from 270 to 300 seconds in length and are all sampled at 22,050 Hz. Because the source website already provides style labels for the downloaded music, no separate annotators were involved; validation was performed during downloading. The pieces are categorized into a total of 16 genres.
+
  ## Dataset Description
  - **Homepage:** <https://ccmusic-database.github.io>
  - **Repository:** <https://huggingface.co/datasets/ccmusic-database/music_genre>
  - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
+ - **Leaderboard:** <https://www.modelscope.cn/datasets/ccmusic/music_genre>
  - **Point of Contact:** <https://huggingface.co/ccmusic-database/music_genre>

  ### Dataset Summary

  ## Maintenance
  ```bash
  GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/music_genre
+ cd music_genre
  ```

  ## Usage
+ ### Eval Subset
  ```python
  from datasets import load_dataset

+ dataset = load_dataset("ccmusic-database/music_genre", name="eval")
+ for item in dataset["train"]:
+     print(item)

+ for item in dataset["validation"]:
      print(item)

+ for item in dataset["test"]:
      print(item)
  ```

+ ### Raw Subset
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("ccmusic-database/music_genre", name="default")
+ for item in dataset["train"]:
+     print(item)
+
+ for item in dataset["validation"]:
+     print(item)
+
+ for item in dataset["test"]:
+     print(item)
+ ```
+
  ## Dataset Structure
+ ### Eval Subset
  <style>
  #genres td {
      vertical-align: middle !important;

  </style>
  <table id="genres">
  <tr>
+ <td>mel(.jpg, 11.4s, 48000Hz)</td>
+ <td>cqt(.jpg, 11.4s, 48000Hz)</td>
+ <td>chroma(.jpg, 11.4s, 48000Hz)</td>
  <td>fst_level_label(2-class)</td>
  <td>sec_level_label(9-class)</td>
  <td>thr_level_label(16-class)</td>
  </tr>
  <tr>
+ <td><img src="./data/PqdpQP__ls-xo6lz93Q4y.jpeg"></td>
+ <td><img src="./data/EZfYLng40hh_FUudB9vvx.jpeg"></td>
+ <td><img src="./data/zviZ-rEKAvBCVFvKFml4R.jpeg"></td>
  <td>1_Classic / 2_Non_classic</td>
  <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 7_Pop / 8_Dance_and_house / 9_Indie / 10_Soul_or_r_and_b / 11_Rock</td>
  <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 12_Pop_vocal_ballad / 13_Adult_contemporary / 14_Teen_pop / 15_Contemporary_dance_pop / 16_Dance_pop / 17_Classic_indie_pop / 18_Chamber_cabaret_and_art_pop / 10_Soul_or_r_and_b / 19_Adult_alternative_rock / 20_Uplifting_anthemic_rock / 21_Soft_rock / 22_Acoustic_pop</td>

  </tr>
  </table>

+ ### Raw Subset
+ <table>
+ <tr>
+ <th>audio(.wav, 22050Hz)</th>
+ <th>mel(spectrogram, .jpg, 22050Hz)</th>
+ <th>fst_level_label(2-class)</th>
+ <th>sec_level_label(9-class)</th>
+ <th>thr_level_label(16-class)</th>
+ </tr>
+ <tr>
+ <td><audio controls src="./data/8bb58041d6b9d35db688bcedfde0fe39.mp3"></audio></td>
+ <td><img src="./data/8bb58041d6b9d35db688bcedfde0fe39.jpg"></td>
+ <td>1_Classic / 2_Non_classic</td>
+ <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 7_Pop / 8_Dance_and_house / 9_Indie / 10_Soul_or_r_and_b / 11_Rock</td>
+ <td>3_Symphony / 4_Opera / 5_Solo / 6_Chamber / 12_Pop_vocal_ballad / 13_Adult_contemporary / 14_Teen_pop / 15_Contemporary_dance_pop / 16_Dance_pop / 17_Classic_indie_pop / 18_Chamber_cabaret_and_art_pop / 10_Soul_or_r_and_b / 19_Adult_alternative_rock / 20_Uplifting_anthemic_rock / 21_Soft_rock / 22_Acoustic_pop</td>
+ </tr>
+ <tr>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ <td>...</td>
+ </tr>
+ </table>
+
  ### Data Instances
  .zip(.jpg)

  ```

  ### Data Splits
+ | Split | Eval | Raw |
+ | :-------------: | :---: | :---: |
+ | total | 36375 | 1713 |
+ | train(80%) | 29100 | 1370 |
+ | validation(10%) | 3637 | 171 |
+ | test(10%) | 3638 | 172 |
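The 8:1:1 arithmetic behind this table can be checked in a few lines of plain Python; `split_counts` is a hypothetical helper mirroring the integer-truncation boundaries (`p80`, `p90`) that the loader script uses:

```python
def split_counts(total: int):
    # Boundaries at 80% and 90%, truncated to integers, so any
    # rounding remainder ends up in the test split.
    p80 = int(total * 0.8)
    p90 = int(total * 0.9)
    return p80, p90 - p80, total - p90

print(split_counts(36375))  # (29100, 3637, 3638) — Eval subset
print(split_counts(1713))   # (1370, 171, 172)   — Raw subset
```

This reproduces the slightly uneven validation/test counts (3637 vs. 3638) shown above.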
 
  ## Dataset Creation
  ### Curation Rationale

  Zhaorui Liu, Monan Zhou

  #### Who are the source language producers?
+ Composers of the songs in the dataset

  ### Annotations
  #### Annotation process

  ```

  ### Citation Information
+ ```bibtex
  @dataset{zhaorui_liu_2021_5676893,
+ author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han},
+ title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+ month = {mar},
+ year = {2024},
+ publisher = {HuggingFace},
+ version = {1.2},
+ url = {https://huggingface.co/ccmusic-database}
  }
  ```
data/{genre_data.zip → 8bb58041d6b9d35db688bcedfde0fe39.jpg} RENAMED
File without changes
data/8bb58041d6b9d35db688bcedfde0fe39.mp3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8406e26bbd885e74e0e6775d5150863d030a8fb9133063df503f4481dfa7687
+ size 4089932
data/EZfYLng40hh_FUudB9vvx.jpeg ADDED

Git LFS Details

  • SHA256: 86c83394c4fa63540addc30ace97883c3b64ad73c3a5b94cabff07d4bb8c7da0
  • Pointer size: 130 Bytes
  • Size of remote file: 50.7 kB
data/PqdpQP__ls-xo6lz93Q4y.jpeg ADDED

Git LFS Details

  • SHA256: 83a92d027b382e4842e3b09b7bd252058c158520da8dcaea5c98a894d748a43c
  • Pointer size: 130 Bytes
  • Size of remote file: 41 kB
data/zviZ-rEKAvBCVFvKFml4R.jpeg ADDED

Git LFS Details

  • SHA256: 94996084a4cafec0bea5b877814493f4b1b17a4e147d41c5231e59d90a92cbf8
  • Pointer size: 130 Bytes
  • Size of remote file: 34.5 kB
music_genre.py CHANGED
@@ -1,13 +1,10 @@
  import os
- import socket
  import random
  import datasets
  from datasets.tasks import ImageClassification

- _NAMES_1 = {
-     1: "Classic",
-     2: "Non_classic"
- }

  _NAMES_2 = {
      3: "Symphony",
@@ -18,7 +15,7 @@ _NAMES_2 = {
      8: "Dance_and_house",
      9: "Indie",
      10: "Soul_or_r_and_b",
-     11: "Rock"
  }

  _NAMES_3 = {
@@ -37,44 +34,93 @@ _NAMES_3 = {
      19: "Adult_alternative_rock",
      20: "Uplifting_anthemic_rock",
      21: "Soft_rock",
-     22: "Acoustic_pop"
  }

- _DBNAME = os.path.basename(__file__).split('.')[0]

- _HOMEPAGE = f"https://huggingface.co/datasets/ccmusic-database/{_DBNAME}"

  _CITATION = """\
  @dataset{zhaorui_liu_2021_5676893,
- author = {Zhaorui Liu, Monan Zhou, Shenyang Xu, Yuan Wang, Zhaowen Wang, Wei Li and Zijin Li},
- title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
- month = {nov},
- year = {2021},
- publisher = {Zenodo},
- version = {1.1},
- doi = {10.5281/zenodo.5676893},
- url = {https://doi.org/10.5281/zenodo.5676893}
  }
  """

  _DESCRIPTION = """\
- This database contains about 1700 musical pieces (.mp3 format) with lengths of 270-300s that are divided into 17 genres in total.
  """

  class music_genre(datasets.GeneratorBasedBuilder):
-     def _info(self):
-         return datasets.DatasetInfo(
              features=datasets.Features(
                  {
                      "mel": datasets.Image(),
                      "cqt": datasets.Image(),
                      "chroma": datasets.Image(),
-                     "fst_level_label": datasets.features.ClassLabel(names=list(_NAMES_1.values())),
-                     "sec_level_label": datasets.features.ClassLabel(names=list(_NAMES_2.values())),
-                     "thr_level_label": datasets.features.ClassLabel(names=list(_NAMES_3.values()))
                  }
              ),
              supervised_keys=("mel", "sec_level_label"),
              homepage=_HOMEPAGE,
              license="mit",
@@ -86,29 +132,58 @@ class music_genre(datasets.GeneratorBasedBuilder):
                  image_column="mel",
                  label_column="sec_level_label",
              )
-         ]
          )

-     def _cdn_url(self, ip='127.0.0.1', port=80):
-         try:
-             # easy for local test
-             with socket.create_connection((ip, port), timeout=5):
-                 return f'http://{ip}/{_DBNAME}/data/genre_data.zip'
-         except (socket.timeout, socket.error):
-             return f"{_HOMEPAGE}/resolve/main/data/genre_data.zip"

      def _split_generators(self, dl_manager):
-         data_files = dl_manager.download_and_extract(self._cdn_url())
-         files = dl_manager.iter_files([data_files])
-
          dataset = []
-         for path in files:
-             if os.path.basename(path).endswith(".jpg") and 'mel' in path:
-                 dataset.append({
-                     'mel': path,
-                     'cqt': path.replace('\\mel\\', '\\cqt\\').replace('/mel/', '/cqt/'),
-                     'chroma': path.replace('\\mel\\', '\\chroma\\').replace('/mel/', '/chroma/')
-                 })

          random.shuffle(dataset)
          data_count = len(dataset)
@@ -118,49 +193,54 @@ class music_genre(datasets.GeneratorBasedBuilder):
          return [
              datasets.SplitGenerator(
                  name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "files": dataset[:p80]
-                 },
              ),
              datasets.SplitGenerator(
                  name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "files": dataset[p80:p90]
-                 },
              ),
              datasets.SplitGenerator(
                  name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "files": dataset[p90:]
-                 },
              ),
          ]

-     def _calc_label(self, path, depth, substr='/mel/'):
          spect = substr
-         dirpath = os.path.dirname(path)
          substr_index = dirpath.find(spect)
          if substr_index < 0:
-             spect = spect.replace('/', '\\')
              substr_index = dirpath.find(spect)

-         labstr = dirpath[substr_index + len(spect):]
-         labs = labstr.split('/')
          if len(labs) < 2:
-             labs = labstr.split('\\')

          if depth <= len(labs):
-             return int(labs[depth - 1].split('_')[0])
          else:
-             return int(labs[-1].split('_')[0])

      def _generate_examples(self, files):
-         for i, path in enumerate(files):
-             yield i, {
-                 "mel": path['mel'],
-                 "cqt": path['cqt'],
-                 "chroma": path['chroma'],
-                 "fst_level_label": _NAMES_1[self._calc_label(path['mel'], 1)],
-                 "sec_level_label": _NAMES_2[self._calc_label(path['mel'], 2)],
-                 "thr_level_label": _NAMES_3[self._calc_label(path['mel'], 3)]
-             }
+ # import hashlib
  import os
  import random
  import datasets
  from datasets.tasks import ImageClassification

+ _NAMES_1 = {1: "Classic", 2: "Non_classic"}

  _NAMES_2 = {
      3: "Symphony",
      8: "Dance_and_house",
      9: "Indie",
      10: "Soul_or_r_and_b",
+     11: "Rock",
  }

  _NAMES_3 = {
      19: "Adult_alternative_rock",
      20: "Uplifting_anthemic_rock",
      21: "Soft_rock",
+     22: "Acoustic_pop",
  }

+ _DBNAME = os.path.basename(__file__).split(".")[0]
+
+ _HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic/{_DBNAME}"

+ _DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic/{_DBNAME}/repo?Revision=master&FilePath=data"

  _CITATION = """\
  @dataset{zhaorui_liu_2021_5676893,
+ author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Zijin Li},
+ title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+ month = {mar},
+ year = {2024},
+ publisher = {HuggingFace},
+ version = {1.2},
+ url = {https://huggingface.co/ccmusic-database}
  }
  """

  _DESCRIPTION = """\
+ The raw dataset comprises approximately 1,700 musical pieces in .mp3 format, sourced from NetEase Music. The pieces range from 270 to 300 seconds in length and are all sampled at 48,000 Hz. Because the source website already provides style labels for the downloaded music, no separate annotators were involved; validation was performed during downloading. The pieces are categorized into a total of 16 genres.
+
+ For the pre-processed version, the audio is cut into 11.4-second segments, resulting in 36,375 files, which are then transformed into Mel, CQT and Chroma spectrograms. Each data entry has six columns: the first three contain the Mel, CQT and Chroma spectrogram slices in .jpg format, while the last three contain the labels for the three levels. The first level comprises two categories, the second nine, and the third 16. The entire dataset is shuffled and split into training, validation and test sets in a ratio of 8:1:1. This dataset can be used for genre classification.
  """

+ _URLS = {
+     "audio": f"{_DOMAIN}/audio.zip",
+     "mel": f"{_DOMAIN}/mel.zip",
+     "eval": f"{_DOMAIN}/eval.zip",
+ }
+
+
+ class music_genre_Config(datasets.BuilderConfig):
+     def __init__(self, features, **kwargs):
+         super(music_genre_Config, self).__init__(
+             version=datasets.Version("1.2.0"), **kwargs
+         )
+         self.features = features


  class music_genre(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.2.0")
+     BUILDER_CONFIGS = [
+         music_genre_Config(
+             name="eval",
              features=datasets.Features(
                  {
                      "mel": datasets.Image(),
                      "cqt": datasets.Image(),
                      "chroma": datasets.Image(),
+                     "fst_level_label": datasets.features.ClassLabel(
+                         names=list(_NAMES_1.values())
+                     ),
+                     "sec_level_label": datasets.features.ClassLabel(
+                         names=list(_NAMES_2.values())
+                     ),
+                     "thr_level_label": datasets.features.ClassLabel(
+                         names=list(_NAMES_3.values())
+                     ),
                  }
              ),
+         ),
+         music_genre_Config(
+             name="default",
+             features=datasets.Features(
+                 {
+                     "audio": datasets.Audio(sampling_rate=22050),
+                     "mel": datasets.Image(),
+                     "fst_level_label": datasets.features.ClassLabel(
+                         names=list(_NAMES_1.values())
+                     ),
+                     "sec_level_label": datasets.features.ClassLabel(
+                         names=list(_NAMES_2.values())
+                     ),
+                     "thr_level_label": datasets.features.ClassLabel(
+                         names=list(_NAMES_3.values())
+                     ),
+                 }
+             ),
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             features=self.config.features,
              supervised_keys=("mel", "sec_level_label"),
              homepage=_HOMEPAGE,
              license="mit",

                  image_column="mel",
                  label_column="sec_level_label",
              )
+         ],
          )

+     # def _str2md5(self, original_string):
+     #     """
+     #     Calculate and return the MD5 hash of a given string.
+     #     Parameters:
+     #         original_string (str): The original string for which the MD5 hash is to be computed.
+     #     Returns:
+     #         str: The hexadecimal representation of the MD5 hash.
+     #     """
+     #     # Create an md5 object
+     #     md5_obj = hashlib.md5()
+     #     # Update the md5 object with the original string encoded as bytes
+     #     md5_obj.update(original_string.encode("utf-8"))
+     #     # Retrieve the hexadecimal representation of the MD5 hash
+     #     md5_hash = md5_obj.hexdigest()
+     #     return md5_hash
      def _split_generators(self, dl_manager):
          dataset = []
+         if self.config.name == "eval":
+             data_files = dl_manager.download_and_extract(_URLS["eval"])
+             for path in dl_manager.iter_files([data_files]):
+                 if os.path.basename(path).endswith(".jpg") and "mel" in path:
+                     dataset.append(
+                         {
+                             "mel": path,
+                             "cqt": path.replace("\\mel\\", "\\cqt\\").replace("/mel/", "/cqt/"),
+                             "chroma": path.replace("\\mel\\", "\\chroma\\").replace("/mel/", "/chroma/"),
+                         }
+                     )
+
+         else:
+             files = {}
+             audio_files = dl_manager.download_and_extract(_URLS["audio"])
+             mel_files = dl_manager.download_and_extract(_URLS["mel"])
+             for path in dl_manager.iter_files([audio_files]):
+                 fname: str = os.path.basename(path)
+                 if fname.endswith(".mp3"):
+                     files[fname.split(".mp")[0]] = {"audio": path}
+
+             for path in dl_manager.iter_files([mel_files]):
+                 fname: str = os.path.basename(path)
+                 if fname.endswith(".jpg"):
+                     files[fname.split(".jp")[0]]["mel"] = path
+
+             dataset = list(files.values())

          random.shuffle(dataset)
          data_count = len(dataset)

          return [
              datasets.SplitGenerator(
                  name=datasets.Split.TRAIN,
+                 gen_kwargs={"files": dataset[:p80]},
              ),
              datasets.SplitGenerator(
                  name=datasets.Split.VALIDATION,
+                 gen_kwargs={"files": dataset[p80:p90]},
              ),
              datasets.SplitGenerator(
                  name=datasets.Split.TEST,
+                 gen_kwargs={"files": dataset[p90:]},
              ),
          ]

+     def _calc_label(self, path, depth, substr="/mel/"):
          spect = substr
+         dirpath: str = os.path.dirname(path)
          substr_index = dirpath.find(spect)
          if substr_index < 0:
+             spect = spect.replace("/", "\\")
              substr_index = dirpath.find(spect)

+         labstr: str = dirpath[substr_index + len(spect) :]
+         labs = labstr.split("/")
          if len(labs) < 2:
+             labs = labstr.split("\\")

          if depth <= len(labs):
+             return int(labs[depth - 1].split("_")[0])
          else:
+             return int(labs[-1].split("_")[0])
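`_calc_label` recovers a numeric class ID from the label folders embedded in a file path such as `.../mel/1_Classic/7_Pop/...`: each folder name starts with its ID, and `depth` selects which level to read. A simplified standalone sketch of the same parsing logic (hypothetical helper and paths; POSIX separators only, the backslash fallback from the method is omitted):

```python
import os


def calc_label(path: str, depth: int, substr: str = "/mel/") -> int:
    # Take the directory part after ".../mel/" and split it into
    # label folders such as "1_Classic/7_Pop".
    dirpath = os.path.dirname(path)
    labstr = dirpath[dirpath.find(substr) + len(substr):]
    labs = labstr.split("/")
    # Clamp depth so a shallower path still answers a deeper query
    # with its last (most specific) available label.
    idx = depth - 1 if depth <= len(labs) else -1
    return int(labs[idx].split("_")[0])


print(calc_label("/tmp/mel/1_Classic/7_Pop/a.jpg", 1))  # 1
print(calc_label("/tmp/mel/1_Classic/7_Pop/a.jpg", 2))  # 7
```

The clamping is why a two-level folder layout can still serve a `depth=3` lookup, falling back to the second-level ID.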
 
      def _generate_examples(self, files):
+         if self.config.name == "eval":
+             for i, path in enumerate(files):
+                 yield i, {
+                     "mel": path["mel"],
+                     "cqt": path["cqt"],
+                     "chroma": path["chroma"],
+                     "fst_level_label": _NAMES_1[self._calc_label(path["mel"], 1)],
+                     "sec_level_label": _NAMES_2[self._calc_label(path["mel"], 2)],
+                     "thr_level_label": _NAMES_3[self._calc_label(path["mel"], 3)],
+                 }
+
+         else:
+             for i, path in enumerate(files):
+                 yield i, {
+                     "audio": path["audio"],
+                     "mel": path["mel"],
+                     "fst_level_label": _NAMES_1[self._calc_label(path["mel"], 1)],
+                     "sec_level_label": _NAMES_2[self._calc_label(path["mel"], 2)],
+                     "thr_level_label": _NAMES_3[self._calc_label(path["mel"], 3)],
+                 }