Datasets: MuGeminorum committed
Commit d94d1f8 • Parent(s): abc636d

sync ms

Files changed:
- README.md +74 -29
- data/{pianos_data.zip → 3800.jpg} +2 -2
- data/{pianos_rawdata.zip → 3800.wav} +2 -2
- data/TYYnuJqndeWzXLJMmOyXJ.jpeg +3 -0
- pianos.py +108 -43
README.md
CHANGED
---

# Dataset Card for Piano Sound Quality Dataset

The raw dataset comprises 12 full-range audio files in .wav/.mp3/.m4a format, covering seven piano models: a Kawai upright, a Kawai grand, a YoungChang upright, a Hsinghai upright, a Grand Theatre Steinway, a Steinway grand, and a Pearl River upright. It also contains 1,320 split monophonic audio files in the same formats, bringing the total to 1,332 files, plus a score sheet in .xls format with subjective evaluations of piano sound quality from 29 participants with musical backgrounds.
## Usage

### Raw Subset

```python
from datasets import load_dataset

ds = load_dataset("ccmusic-database/pianos", name="default")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```

### Eval Subset

```python
from datasets import load_dataset

ds = load_dataset("ccmusic-database/pianos", name="eval")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```
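In both subsets, each record's `label` is a class index over the eight piano models. Here is a minimal sketch of the index↔name round trip, mirroring what `datasets.ClassLabel` provides via `int2str`/`str2int`; the ordering below follows the structure table in this card and is an assumption about the authoritative `_NAMES` order in pianos.py:

```python
# The 8 piano classes of the label feature; the order follows this
# card's structure table and is an assumption about _NAMES in pianos.py.
NAMES = ["PearlRiver", "YoungChang", "Steinway-T", "Hsinghai",
         "Kawai", "Steinway", "Kawai-G", "Yamaha"]

def int2str(i: int) -> str:
    """Class index -> brand name, as ClassLabel.int2str would give."""
    return NAMES[i]

def str2int(name: str) -> int:
    """Brand name -> class index, as ClassLabel.str2int would give."""
    return NAMES.index(name)

print(int2str(7))           # Yamaha (the class added in this revision)
print(str2int("Hsinghai"))  # 3
```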
## Maintenance

```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/pianos
cd pianos
```
## Dataset Description

- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/pianos>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://www.modelscope.cn/datasets/ccmusic/pianos>
- **Point of Contact:** <https://arxiv.org/abs/2310.04722>
### Dataset Summary

To enlarge the dataset and cover a popular brand that was missing, Yamaha, the dataset was extended with recordings of an upright Yamaha piano in [[1]](https://arxiv.org/pdf/2310.04722.pdf), where the recording details can be found; this brings the total to 2,020 audio files. Because the models used in that article require more data, augmentation was performed: the original audio was converted into Mel spectrograms and sliced into 0.18-second segments, a length chosen empirically, yielding 18,745 spectrogram slices. Although 0.18 seconds may seem short, it suffices for this task, since classifying piano sound quality does not rely heavily on the temporal structure of the audio.
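The slicing step above can be sketched as follows. This is a simplified illustration rather than the authors' pipeline: it only shows the 0.18-second windowing arithmetic at the raw subset's 22050 Hz rate, and it assumes trailing partial windows are dropped (the card does not say how they are handled); the real pipeline renders each slice as a Mel-spectrogram .jpg.

```python
SR = 22050        # sampling rate of the raw subset (.wav, 22050 Hz)
SLICE_SEC = 0.18  # empirically chosen slice length from the card

def slice_windows(n_samples: int, sr: int = SR, slice_sec: float = SLICE_SEC):
    """Return (number of slices, samples per slice) for non-overlapping
    0.18 s windows; assumes trailing partial windows are dropped."""
    win = round(sr * slice_sec)  # 3969 samples per window
    return n_samples // win, win

# a hypothetical 2-second monophonic recording
print(slice_windows(2 * SR))  # (11, 3969)
```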
### Supported Tasks and Leaderboards

Piano Sound Classification, pitch detection

### Languages

English

## Dataset Structure

### Eval Subset

<style>
  .pianos td {
    vertical-align: middle !important;
    text-align: center;
  }
  .pianos th {
    text-align: center;
  }
</style>

<table class="pianos">
  <tr>
    <th>mel(.jpg, 0.18s)</th>
    <th>label(8-class)</th>
    <th>pitch(88-class)</th>
  </tr>
  <tr>
    <td><img src="./data/TYYnuJqndeWzXLJMmOyXJ.jpeg"></td>
    <td>PearlRiver / YoungChang / Steinway-T / Hsinghai / Kawai / Steinway / Kawai-G / Yamaha</td>
    <td>88 pitches on piano</td>
  </tr>
  <tr>
    <td>...</td>
    <td>...</td>
    <td>...</td>
  </tr>
</table>

### Raw Subset

<table class="pianos">
  <tr>
    <th>audio(.wav, 22050Hz)</th>
    <th>mel(.jpg)</th>
    <th>label(8-class)</th>
    <th>pitch(88-class)</th>
  </tr>
  <tr>
    <td><audio controls src="./data/3800.wav"></audio></td>
    <td><img src="./data/3800.jpg"></td>
    <td>PearlRiver / YoungChang / Steinway-T / Hsinghai / Kawai / Steinway / Kawai-G / Yamaha</td>
    <td>88 pitches on piano</td>
  </tr>
  <tr>
    <td>...</td>
    <td>...</td>
    <td>...</td>
    <td>...</td>
  </tr>
</table>
```
[...]
8_Yamaha
```

### Data Splits for Eval Subset

| Split           | Eval  | Eval |
| :-------------: | :---: | :--: |
| total           | 18745 | 668  |
| train(80%)      | 14996 | 534  |
| validation(10%) | 1874  | 67   |
| test(10%)       | 1875  | 67   |
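The counts above are consistent with slicing a shuffled list at the 80% and 90% index marks, as the loader in pianos.py does; the exact expression below is an assumption that happens to reproduce both columns:

```python
def split_sizes(count: int):
    """Counts for an 80/10/10 split taken by slicing a shuffled list at
    the 80% and 90% index marks: [0:0.8n), [0.8n:0.9n), [0.9n:n)."""
    p80, p90 = int(count * 0.8), int(count * 0.9)
    return p80, p90 - p80, count - p90

print(split_sizes(18745))  # (14996, 1874, 1875) -- left column
print(split_sizes(668))    # (534, 67, 67)       -- right column
```

Note that any rounding remainder lands in the test split, which is why test has one more slice than validation in the left column.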
## Dataset Creation

### Curation Rationale

[...]

## Considerations for Using the Data

### Social Impact of Dataset

Help develop piano sound quality scoring apps

### Discussion of Biases

Only for pianos
[...]
Zijin Li

### Evaluation

[1] [Monan Zhou, Shangda Wu, Shaohua Ji, Zijin Li, and Wei Li. A Holistic Evaluation of Piano Sound Quality[C]//Proceedings of the 10th Conference on Sound and Music Technology (CSMT). Springer, Singapore, 2023.](https://arxiv.org/pdf/2310.04722.pdf)

(Note: this paper only uses the first 7 piano classes in the dataset; its follow-up work has since completed the 8-class evaluation.)

### Licensing Information

```
[...]
```

### Citation Information

```bibtex
@dataset{zhaorui_liu_2021_5676893,
  author    = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han},
  title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month     = {mar},
  year      = {2024},
  publisher = {HuggingFace},
  version   = {1.2},
  url       = {https://huggingface.co/ccmusic-database}
}
```
data/{pianos_data.zip → 3800.jpg}
RENAMED
File without changes

data/{pianos_rawdata.zip → 3800.wav}
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a875d995b0372d4a0a33a67a2b53032667898660162c30f23cf0655406034710
+size 333604

data/TYYnuJqndeWzXLJMmOyXJ.jpeg
ADDED
Git LFS Details
pianos.py
CHANGED
@@ -1,8 +1,7 @@
 import os
 import random
-import socket
 import datasets
-from datasets.tasks import ImageClassification
+from datasets.tasks import ImageClassification, AudioClassification

@@ -143,9 +158,6 @@ class pianos(datasets.GeneratorBasedBuilder):
             ),
             supervised_keys=("mel", "label"),
-            homepage=_HOMEPAGE,
-            license="mit",
-            citation=_CITATION,
             task_templates=[

@@ -153,31 +165,74 @@ class pianos(datasets.GeneratorBasedBuilder):
     def _split_generators(self, dl_manager):
-        data_files = dl_manager.download_and_extract(self._cdn_url())
         dataset = []
```python
import os
import random
import datasets
from datasets.tasks import ImageClassification, AudioClassification


_NAMES = [
    # ... (earlier piano classes elided in the diff view)
    "Yamaha",
]

_DBNAME = os.path.basename(__file__).split(".")[0]

_HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic/{_DBNAME}"

_DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic/{_DBNAME}/repo?Revision=master&FilePath=data"

_CITATION = """\
@dataset{zhaorui_liu_2021_5676893,
  author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Zijin Li},
  title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month = {mar},
  year = {2024},
  publisher = {HuggingFace},
  version = {1.2},
  url = {https://huggingface.co/ccmusic-database}
}
"""

# ... (_DESCRIPTION and most of _PITCHES elided in the diff view)

_PITCHES = {
    # ...
    "800": "c5",
}

_URLS = {
    "audio": f"{_DOMAIN}/audio.zip",
    "mel": f"{_DOMAIN}/mel.zip",
    "eval": f"{_DOMAIN}/eval.zip",
}


class pianos_Config(datasets.BuilderConfig):
    def __init__(self, features, supervised_keys, task_templates, **kwargs):
        super(pianos_Config, self).__init__(version=datasets.Version("0.1.2"), **kwargs)
        self.features = features
        self.supervised_keys = supervised_keys
        self.task_templates = task_templates


class pianos(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("0.1.2")
    BUILDER_CONFIGS = [
        pianos_Config(
            name="eval",
            features=datasets.Features(
                {
                    "mel": datasets.Image(),
                    # ... (label/pitch features elided in the diff view)
                }
            ),
            supervised_keys=("mel", "label"),
            task_templates=[
                ImageClassification(
                    task="image-classification",
                    # ... (image_column elided in the diff view)
                    label_column="label",
                )
            ],
        ),
        pianos_Config(
            name="default",
            features=datasets.Features(
                {
                    "audio": datasets.Audio(sampling_rate=22050),
                    "mel": datasets.Image(),
                    "label": datasets.features.ClassLabel(names=_NAMES),
                    "pitch": datasets.features.ClassLabel(
                        names=list(_PITCHES.values())
                    ),
                }
            ),
            supervised_keys=("audio", "label"),
            task_templates=[
                AudioClassification(
                    task="audio-classification",
                    audio_column="audio",
                    label_column="label",
                )
            ],
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=self.config.features,
            homepage=_HOMEPAGE,
            license="mit",
            citation=_CITATION,
            supervised_keys=self.config.supervised_keys,
            task_templates=self.config.task_templates,
        )

    def _split_generators(self, dl_manager):
        dataset = []
        if self.config.name == "eval":
            data_files = dl_manager.download_and_extract(_URLS["eval"])
            for path in dl_manager.iter_files([data_files]):
                fname = os.path.basename(path)
                if fname.endswith(".jpg"):
                    dataset.append(
                        {
                            "mel": path,
                            "label": os.path.basename(os.path.dirname(path)),
                            "pitch": _PITCHES[fname.split("_")[0]],
                        }
                    )
        else:
            subset = {}
            audio_files = dl_manager.download_and_extract(_URLS["audio"])
            for path in dl_manager.iter_files([audio_files]):
                fname = os.path.basename(path)
                if fname.endswith(".wav"):
                    subset[fname.split(".")[0]] = {
                        "audio": path,
                        "label": os.path.basename(os.path.dirname(path)),
                        "pitch": _PITCHES[fname[1:4]],
                    }

            mel_files = dl_manager.download_and_extract(_URLS["mel"])
            for path in dl_manager.iter_files([mel_files]):
                fname = os.path.basename(path)
                if fname.endswith(".jpg"):
                    subset[fname.split(".")[0]]["mel"] = path

            dataset = list(subset.values())

        random.shuffle(dataset)
        count = len(dataset)
        # ... (80/10/10 SplitGenerator construction elided in the diff view)

    def _generate_examples(self, files):
        if self.config.name == "eval":
            for i, path in enumerate(files):
                yield i, {
                    "mel": path["mel"],
                    "label": path["label"],
                    "pitch": path["pitch"],
                }
        else:
            for i, path in enumerate(files):
                yield i, {
                    "audio": path["audio"],
                    "mel": path["mel"],
                    "label": path["label"],
                    "pitch": path["pitch"],
                }
```
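The loader recovers the `pitch` feature from filenames in two different ways: eval spectrograms carry the pitch key before the first underscore, while raw `.wav` names carry it in characters 1–3. A toy sketch of both conventions (the `_PITCHES` stand-in and the sample paths are illustrative; only the `"800": "c5"` entry is visible in the diff):

```python
import os

# Toy stand-in for the loader's _PITCHES table: "800": "c5" is the entry
# visible in the diff; the directory layout below is hypothetical.
_PITCHES = {"800": "c5"}

def pitch_of(path: str) -> str:
    """Recover the pitch key the same two ways pianos.py does."""
    fname = os.path.basename(path)
    if fname.endswith(".jpg"):  # eval subset: "<pitch-key>_<n>.jpg"
        return _PITCHES[fname.split("_")[0]]
    if fname.endswith(".wav"):  # raw subset: pitch key at chars [1:4]
        return _PITCHES[fname[1:4]]
    raise ValueError(f"unexpected file type: {fname}")

print(pitch_of("eval/8_Yamaha/800_0001.jpg"))  # c5
print(pitch_of("audio/8_Yamaha/3800.wav"))     # c5
```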