MuGeminorum committed
Commit 00a729a
Parent(s): 24d4d37

sync ms

Files changed:
- README.md +37 -14
- data/{labels.zip → Pentatonix - Valentine.jpg} +2 -2
- data/{audios.zip → Pentatonix - Valentine.mp3} +2 -2
- song_structure.py +106 -46
README.md
CHANGED
@@ -10,17 +10,21 @@ tags:
 pretty_name: Song Structure Annotation Database
 size_categories:
 - n<1K
+viewer: false
 ---
 
+# Dataset Card for Song Structure
+The raw dataset comprises 300 pop songs in .mp3 format, sourced from NetEase Music, accompanied by a structure annotation file for each song in .txt format. The music structures were annotated by a professional musician and teacher from the China Conservatory of Music. Statistically, the dataset contains 208 Chinese songs, 87 English songs, three Korean songs and two Japanese songs. The song structures are labeled as follows: intro, re-intro, verse, chorus, pre-chorus, post-chorus, bridge, interlude and ending. Fig. 7 shows the frequency of each segment label appearing in the set. Chorus and verse are the two most prevalent segment labels in the dataset, and they are also the most common segments in Western popular music; the post-chorus label is the rarest, with only two occurrences.
+
 ## Dataset Description
 - **Homepage:** <https://ccmusic-database.github.io>
 - **Repository:** <https://huggingface.co/datasets/CCMUSIC/song_structure>
 - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
 - **Leaderboard:** <https://ccmusic-database.github.io/team.html>
-- **Point of Contact:**
+- **Point of Contact:** <https://www.modelscope.cn/datasets/ccmusic/song_structure>
 
 ### Dataset Summary
-
+Unlike the three classification datasets above, this one has not undergone pre-processing such as a spectrogram transform, so only the original content is provided. The integrated version of the dataset is organized around audio files, with each item structured into three columns: the first column contains the audio of the song in .mp3 format, sampled at 22,050 Hz; the second column consists of lists of time points marking the boundaries of the song sections; and the third column contains lists of the corresponding song-structure labels. Strictly speaking, the first column is the data, while the latter two columns are the label.
 
 ### Supported Tasks and Leaderboards
 time-series-forecasting
@@ -28,7 +32,27 @@ time-series-forecasting
 ### Languages
 Chinese, English
 
+## Usage
+```python
+from datasets import load_dataset
+
+ds = load_dataset("ccmusic-database/song_structure")
+for item in ds["train"]:
+    print(item)
+
+for item in ds["validation"]:
+    print(item)
+
+for item in ds["test"]:
+    print(item)
+```
+
 ## Dataset Structure
+| audio | mel | label |
+| :------------------------------------------------------: | :-------------------------------------------: | :-----------------------------------------------------: |
+| <audio controls src="./data/Pentatonix - Valentine.mp3"> | <img src="./data/Pentatonix - Valentine.jpg"> | {onset_time:uint32,offset_time:uint32,structure:string} |
+| ... | ... | ... |
+
 ### Data Instances
 .wav, .txt
 
@@ -59,11 +83,11 @@ Students from CCMUSIC collected 300 pop songs, as well as a structure annotation
 Students from CCMUSIC
 
 ### Personal and Sensitive Information
-Due to copyright issues with the original music, only features of
+Due to copyright issues with the original music, only frame-level audio features are provided in the dataset
 
 ## Considerations for Using the Data
 ### Social Impact of Dataset
-Promoting the development of AI music industry
+Promoting the development of the AI music industry
 
 ### Discussion of Biases
 Only for mp3
@@ -79,7 +103,7 @@ Zijin Li
 ```
 MIT License
 
-Copyright (c)
+Copyright (c) CCMUSIC
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
@@ -101,16 +125,15 @@ SOFTWARE.
 ```
 
 ### Citation Information
-```
+```bibtex
 @dataset{zhaorui_liu_2021_5676893,
-  author = {Zhaorui Liu,
-  title = {
-  month =
-  year =
-  publisher = {
-  version = {1.
-
-  url = {https://doi.org/10.5281/zenodo.5676893}
+  author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han},
+  title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+  month = {mar},
+  year = {2024},
+  publisher = {HuggingFace},
+  version = {1.2},
+  url = {https://huggingface.co/ccmusic-database}
 }
 ```
 
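The `{onset_time, offset_time, structure}` label schema in the table above lends itself to simple post-processing. A minimal sketch of turning one label sequence into per-segment durations; the sample times and the `segment_durations` helper are invented for illustration, not part of the dataset:

```python
# Hypothetical sample following the card's label schema; times are invented.
label = {
    "onset_time": [0, 12, 45],
    "offset_time": [12, 45, 78],
    "structure": ["intro", "verse", "chorus"],
}


def segment_durations(label: dict) -> list:
    """Pair each structure tag with its duration (offset - onset)."""
    return [
        (tag, off - on)
        for on, off, tag in zip(
            label["onset_time"], label["offset_time"], label["structure"]
        )
    ]


print(segment_durations(label))  # → [('intro', 12), ('verse', 33), ('chorus', 33)]
```

The same pattern works on an item loaded via `load_dataset`, since its `label` column carries the same three parallel lists.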
data/{labels.zip → Pentatonix - Valentine.jpg}
RENAMED
File without changes

data/{audios.zip → Pentatonix - Valentine.mp3}
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:01b901b1f4e22eb6da5af1c442c4ea060ce954a8ce8465dd7583aef52035b131
+size 2514485
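The renamed .mp3 is stored via Git LFS, so what the diff shows is the pointer file: a short `key value` text stanza (version, oid, size) rather than audio bytes. A small illustrative parser; the `parse_lfs_pointer` helper is ours, not part of the repository:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict entry."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer content taken from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:01b901b1f4e22eb6da5af1c442c4ea060ce954a8ce8465dd7583aef52035b131
size 2514485"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 2514485 (size of the real file, in bytes)
```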
song_structure.py
CHANGED
@@ -1,81 +1,141 @@
 import os
+import csv
+import random
+import hashlib
 import datasets
-from datasets.tasks import AudioClassification
 
-
-_NAMES = ["intro", "chorus", "verse", "pre-chorus", "post-chorus", "bridge"]
+_DBNAME = os.path.basename(__file__).split(".")[0]
 
-
-
-_HOMEPAGE = "https://huggingface.co/datasets/ccmusic-database/" + _DBNAME
+_HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic/{_DBNAME}"
 
+_DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic/{_DBNAME}/repo?Revision=master&FilePath=data"
+
 _CITATION = """\
 @dataset{zhaorui_liu_2021_5676893,
-  author = {Zhaorui Liu,
-  title = {
-  month =
-  year =
-  publisher = {
-  version = {1.
-
-  url = {https://doi.org/10.5281/zenodo.5676893}
+  author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han},
+  title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+  month = {mar},
+  year = {2024},
+  publisher = {HuggingFace},
+  version = {1.2},
+  url = {https://huggingface.co/ccmusic-database}
 }
 """
 
 _DESCRIPTION = """\
-
-as well as a structure annotation file (.txt format) for each song.
-The song structure is labeled as follows:
-intro, chorus, verse, pre-chorus, post-chorus, bridge, ending.
-"""
+The raw dataset comprises 300 pop songs in .mp3 format, sourced from the NetEase music, accompanied by a structure annotation file for each song in .txt format. The annotator for music structure is a professional musician and teacher from the China Conservatory of Music. For the statistics of the dataset, there are 208 Chinese songs, 87 English songs, three Korean songs and two Japanese songs. The song structures are labeled as follows: intro, re-intro, verse, chorus, pre-chorus, post-chorus, bridge, interlude and ending. Fig. 7 shows the frequency of each segment label that appears in the set. The labels chorus and verse are the two most prevalent segment labels in the dataset and they are the most common segment in Western popular music. Among them, the number of "Postchorus" tags is the least, with only two present.
 
-
+Unlike the above three datasets for classification, this one has not undergone pre-processing such as spectrogram transform. Thus we provide the original content only. The integrated version of the dataset is organized based on audio files, with each item structured into three columns: The first column contains the audio of the song in .mp3 format, sampled at 44,100 Hz. The second column consists of lists indicating the time points that mark the boundaries of different song sections, while the third column contains lists corresponding to the labels of the song structures listed in the second column. Strictly speaking, the first column represents the data, while the subsequent two columns represent the label.
+"""
 
+_URLS = {
+    "audio": f"{_DOMAIN}/audio.zip",
+    "mel": f"{_DOMAIN}/mel.zip",
+    "label": f"{_DOMAIN}/label.zip",
+}
 
-class piano_sound_quality(datasets.GeneratorBasedBuilder):
+
+class song_structure(datasets.GeneratorBasedBuilder):
     def _info(self):
         return datasets.DatasetInfo(
-            description=_DESCRIPTION,
             features=datasets.Features(
                 {
-                    "
-                    "
-                    "label": datasets.
+                    "audio": datasets.Audio(sampling_rate=22050),
+                    "mel": datasets.Image(),
+                    "label": datasets.Sequence(
+                        feature={
+                            "onset_time": datasets.Value("uint32"),
+                            "offset_time": datasets.Value("uint32"),
+                            "structure": datasets.Value("string"),
+                        }
+                    ),
                 }
             ),
-            supervised_keys=("
+            supervised_keys=("audio", "label"),
            homepage=_HOMEPAGE,
            license="mit",
            citation=_CITATION,
-
-            AudioClassification(
-                task="audio-classification",
-                audio_column="time",
-                label_column="label",
-            )
-        ],
+            description=_DESCRIPTION,
         )
 
+    def _parse_txt_label(self, txt_file):
+        label = []
+        with open(txt_file, mode="r", encoding="utf-8") as file:
+            reader = csv.reader(file, delimiter="\t")
+            for row in reader:
+                if len(row) == 3:
+                    label.append(
+                        {
+                            "onset_time": int(row[0]),
+                            "offset_time": int(row[1]),
+                            "structure": str(row[2]),
+                        }
+                    )
+
+        return label
+
+    def _str2md5(self, original_string):
+        """
+        Calculate and return the MD5 hash of a given string.
+        Parameters:
+        original_string (str): The original string for which the MD5 hash is to be computed.
+        Returns:
+        str: The hexadecimal representation of the MD5 hash.
+        """
+        # Create an md5 object
+        md5_obj = hashlib.md5()
+        # Update the md5 object with the original string encoded as bytes
+        md5_obj.update(original_string.encode("utf-8"))
+        # Retrieve the hexadecimal representation of the MD5 hash
+        md5_hash = md5_obj.hexdigest()
+        return md5_hash
+
     def _split_generators(self, dl_manager):
-
+        audio_files = dl_manager.download_and_extract(_URLS["audio"])
+        mel_files = dl_manager.download_and_extract(_URLS["mel"])
+        txt_files = dl_manager.download_and_extract(_URLS["label"])
+        files = {}
+
+        for path in dl_manager.iter_files([audio_files]):
+            fname: str = os.path.basename(path)
+            if fname.endswith(".mp3"):
+                item_id = self._str2md5(fname.split(".mp")[0])
+                files[item_id] = {"audio": path}
+
+        for path in dl_manager.iter_files([mel_files]):
+            fname: str = os.path.basename(path)
+            if fname.endswith(".jpg"):
+                item_id = self._str2md5(fname.split(".jp")[0])
+                files[item_id]["mel"] = path
+
+        for path in dl_manager.iter_files([txt_files]):
+            fname: str = os.path.basename(path)
+            if fname.endswith(".txt"):
+                item_id = self._str2md5(fname.split(".tx")[0])
+                files[item_id]["label"] = self._parse_txt_label(path)
+
+        dataset = list(files.values())
+        random.shuffle(dataset)
+        data_count = len(dataset)
+        p80 = int(data_count * 0.8)
+        p90 = int(data_count * 0.9)
 
         return [
             datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-
-
-                }
-            )
+                name=datasets.Split.TRAIN, gen_kwargs={"files": dataset[:p80]}
+            ),
+            datasets.SplitGenerator(
+                name=datasets.Split.VALIDATION, gen_kwargs={"files": dataset[p80:p90]}
+            ),
+            datasets.SplitGenerator(
+                name=datasets.Split.TEST, gen_kwargs={"files": dataset[p90:]}
+            ),
         ]
 
     def _generate_examples(self, files):
         for i, path in enumerate(files):
-
-
-
-
-
-            "label": 0,
-        }
+            yield i, {
+                "audio": path["audio"],
+                "mel": path["mel"],
+                "label": path["label"],
+            }
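The new `_split_generators` shuffles the assembled items and slices them at 80 % and 90 %, giving an 8:1:1 train/validation/test split; since the script never seeds `random`, split membership changes between downloads. A self-contained sketch of that slicing logic, using a synthetic item list and a seed we add ourselves for reproducibility:

```python
import random

# Synthetic stand-in for the shuffled list of {audio, mel, label} items.
items = list(range(100))
random.seed(0)  # the loading script itself does NOT seed, so its splits vary
random.shuffle(items)

data_count = len(items)
p80 = int(data_count * 0.8)  # end of the train slice
p90 = int(data_count * 0.9)  # end of the validation slice

train, validation, test = items[:p80], items[p80:p90], items[p90:]
print(len(train), len(validation), len(test))  # → 80 10 10
```

Pinning a seed as sketched here is one way to make the three splits reproducible across runs; as committed, users who need stable splits would have to cache the downloaded dataset instead.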