MuGeminorum committed
Commit • bcd415f
1 Parent(s): 1e95c9a
sync ms

Files changed:
- .gitignore: +1 -2
- README.md: +84 -24
- chest_falsetto.py: +121 -101
- data/{chestfalsetto_rawdata.zip → 0001_m_chest.jpg}: +2 -2
- data/{chestfalsetto_data.zip → 48qPVDDIZe0ttsYXrTJEh.jpeg}: +2 -2
- data/W8wy7pkYZtCt3lI5Oq39l.jpeg: +3 -0
- data/zm0KorKYtmvOje8qmivHJ.jpeg: +3 -0
.gitignore CHANGED
@@ -1,4 +1,3 @@
 rename.sh
 test.py
-*.wav
-*.jpg
+*.wav
README.md CHANGED
@@ -15,9 +15,42 @@ viewer: false
 ---
 
 # Dataset Card for Chest voice and Falsetto Dataset
 ## Maintenance
 ```bash
-GIT_LFS_SKIP_SMUDGE=1 git
 ```
 
 ## Dataset Description
@@ -25,10 +58,10 @@ GIT_LFS_SKIP_SMUDGE=1 git@hf.co:datasets/ccmusic-database/chest_falsetto
 - **Repository:** <https://huggingface.co/datasets/ccmusic-database/chest_falsetto>
 - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
 - **Leaderboard:** <https://ccmusic-database.github.io/team.html>
-- **Point of Contact:**
 
 ### Dataset Summary
-
 
 ### Supported Tasks and Leaderboards
 Audio classification, singing method classification, voice classification
@@ -38,29 +71,28 @@ Chinese, English
 
 ## Dataset Structure
 <style>
-
     vertical-align: middle !important;
     text-align: center;
 }
-
     text-align: center;
 }
 </style>
-
 <tr>
-    <th>
-    <th>
-    <th>
-    <th>chroma(.jpg)</th>
     <th>label</th>
     <th>gender</th>
     <th>singing_method</th>
 </tr>
 <tr>
-    <td><
-    <td><img src="
-    <td><img src="
-    <td><img src="https://cdn-uploads.huggingface.co/production/uploads/655e0a5b8c2d4379a71882a9/zm0KorKYtmvOje8qmivHJ.jpeg"></td>
     <td>m_chest, m_falsetto, f_chest, f_falsetto</td>
     <td>male, female</td>
     <td>chest, falsetto</td>
@@ -72,6 +104,30 @@ Chinese, English
     <td>...</td>
     <td>...</td>
     <td>...</td>
     <td>...</td>
 </tr>
 </table>
@@ -83,7 +139,12 @@ Chinese, English
 m_chest, f_chest, m_falsetto, f_falsetto
 
 ### Data Splits
-
 
 ## Dataset Creation
 ### Curation Rationale
@@ -150,16 +211,15 @@ SOFTWARE.
 ```
 
 ### Citation Information
-```
 @dataset{zhaorui_liu_2021_5676893,
-    author = {
-    title = {
-    month = {
-    year = {
-    publisher = {
-    version = {1.
-
-    url = {https://doi.org/10.5281/zenodo.5676893}
 }
 ```
---

# Dataset Card for Chest voice and Falsetto Dataset
The raw dataset comprises 1,280 monophonic singing audio files in .wav format (sample rate 22,050 Hz), consisting of chest and falsetto voices performed, recorded, and annotated by students majoring in Vocal Music at the China Conservatory of Music. The chest voice is tagged as chest and the falsetto voice is tagged as falsetto. The dataset also includes the Mel spectrogram, Mel-frequency cepstral coefficients (MFCC), and spectral characteristics of each audio segment, resulting in a total of 5,120 CSV files. The original dataset did not differentiate between male and female voices, an omission that is critical for accurately identifying chest and falsetto vocal techniques; to address this, we conducted a meticulous manual review and added gender annotations to the dataset. Besides the original content, the preprocessed version used during the evaluation, detailed in Section IV, is also provided. The same two-version approach is applied to the two subsequent classification datasets that have not yet been evaluated: the Music Genre Dataset and the Bel Canto & Chinese Folk Singing Dataset.

### Eval Subset
```python
from modelscope.msdatasets import MsDataset

ds = MsDataset.load("ccmusic/chest_falsetto", subset_name="eval")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```

### Raw Subset
```python
from modelscope.msdatasets import MsDataset

ds = MsDataset.load("ccmusic/chest_falsetto", subset_name="default")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```
## Maintenance
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone https://www.modelscope.cn/datasets/ccmusic/chest_falsetto.git
cd chest_falsetto
```

## Dataset Description
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/chest_falsetto>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** <https://www.modelscope.cn/datasets/ccmusic/chest_falsetto>

### Dataset Summary
For the pre-processed version, the audio was sliced into 0.25-second clips and then transformed into Mel, CQT, and Chroma spectrograms in .jpg format, resulting in 8,974 files. The chest/falsetto label of each file is one of four classes: m_chest, m_falsetto, f_chest, and f_falsetto. The spectrograms, the chest/falsetto label, and the gender label are combined into one data entry, with the first three columns holding the Mel, CQT, and Chroma images and the fourth and fifth columns the chest/falsetto and gender labels, respectively. The integrated dataset also provides a function to shuffle and split the data into training, validation, and test sets in an 8:1:1 ratio. This dataset can be used for singing-related tasks such as singing gender classification or chest and falsetto voice classification.
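The 0.25-second slicing step described above can be sketched as follows. `segment_bounds` is an illustrative helper, not part of the dataset tooling, and the spectrogram rendering named in the comment (librosa calls) is an assumption about the preprocessing toolchain.

```python
def segment_bounds(n_samples: int, sr: int, win_sec: float = 0.25):
    """Start/end sample indices of consecutive win_sec windows (short tail dropped)."""
    hop = int(sr * win_sec)
    return [(start, start + hop) for start in range(0, n_samples - hop + 1, hop)]

# A 1-second clip at the raw subset's 22,050 Hz rate yields four 0.25 s segments;
# each segment would then be rendered to Mel/CQT/Chroma .jpg images (e.g. with
# librosa.feature.melspectrogram, librosa.cqt, and librosa.feature.chroma_stft).
print(segment_bounds(22050, 22050))
# → [(0, 5512), (5512, 11024), (11024, 16536), (16536, 22048)]
```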
### Supported Tasks and Leaderboards
Audio classification, singing method classification, voice classification

## Dataset Structure
<style>
    .datastructure td {
        vertical-align: middle !important;
        text-align: center;
    }
    .datastructure th {
        text-align: center;
    }
</style>

### Eval Subset
<table class="datastructure">
    <tr>
        <th>mel(.jpg, 48000Hz)</th>
        <th>cqt(.jpg, 48000Hz)</th>
        <th>chroma(.jpg, 48000Hz)</th>
        <th>label</th>
        <th>gender</th>
        <th>singing_method</th>
    </tr>
    <tr>
        <td><img src="./data/W8wy7pkYZtCt3lI5Oq39l.jpeg"></td>
        <td><img src="./data/48qPVDDIZe0ttsYXrTJEh.jpeg"></td>
        <td><img src="./data/zm0KorKYtmvOje8qmivHJ.jpeg"></td>
        <td>m_chest, m_falsetto, f_chest, f_falsetto</td>
        <td>male, female</td>
        <td>chest, falsetto</td>
    </tr>
    <tr>
        <td>...</td>
        <td>...</td>
        <td>...</td>
        <td>...</td>
        <td>...</td>
        <td>...</td>
    </tr>
</table>

### Raw Subset
<table class="datastructure">
    <tr>
        <th>audio(.wav, 22050Hz)</th>
        <th>mel(spectrogram, .jpg, 22050Hz)</th>
        <th>label(4-class)</th>
        <th>gender(2-class)</th>
        <th>singing_method(2-class)</th>
    </tr>
    <tr>
        <td><audio controls src="https://cdn-uploads.huggingface.co/production/uploads/655e0a5b8c2d4379a71882a9/LKSBb11kCyPl15b-DJo6V.wav"></audio></td>
        <td><img src="./data/0001_m_chest.jpg"></td>
        <td>m_chest, m_falsetto, f_chest, f_falsetto</td>
        <td>male, female</td>
        <td>chest, falsetto</td>
    </tr>
    <tr>
        <td>...</td>
        <td>...</td>
        <td>...</td>
        <td>...</td>
        <td>...</td>
    </tr>
</table>

m_chest, f_chest, m_falsetto, f_falsetto

### Data Splits
|      Split      | Eval  |  Raw  |
| :-------------: | :---: | :---: |
|      total      | 8974  | 1280  |
|   train(80%)    | 7179  | 1024  |
| validation(10%) |  897  |  128  |
|    test(10%)    |  898  |  128  |
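The counts in the table follow from truncating integer arithmetic at the 80% and 90% marks of the shuffled data, which is how the loader script partitions it. The `split_sizes` helper below is illustrative, not part of the dataset code:

```python
def split_sizes(total: int):
    """Train/validation/test sizes under an 8:1:1 split with int() truncation."""
    p80 = int(total * 0.8)
    p90 = int(total * 0.9)
    return p80, p90 - p80, total - p90

print(split_sizes(8974))  # → (7179, 897, 898), matching the Eval column
print(split_sizes(1280))  # → (1024, 128, 128), matching the Raw column
```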
## Dataset Creation
### Curation Rationale

### Citation Information
```bibtex
@dataset{zhaorui_liu_2021_5676893,
  author    = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han},
  title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month     = {mar},
  year      = {2024},
  publisher = {HuggingFace},
  version   = {1.2},
  url       = {https://huggingface.co/ccmusic-database}
}
```
chest_falsetto.py CHANGED
@@ -1,116 +1,132 @@
 import os
-import socket
 import random
 import datasets
-from datasets.tasks import
 
 _NAMES = {
 }
 
-_HOMEPAGE = f"https://
 
 _CITATION = """\
 @dataset{zhaorui_liu_2021_5676893,
-    author = {
-    title = {
-    month = {
-    year = {
-    publisher = {
-    version = {1.
-    url = {https://doi.org/10.5281/zenodo.5676893}
 }
 """
 
 _DESCRIPTION = """\
-for
 """
 
 class chest_falsetto(datasets.GeneratorBasedBuilder):
         features=datasets.Features(
             {
-                "audio": datasets.Audio(sampling_rate=44_100),
                 "mel": datasets.Image(),
                 "cqt": datasets.Image(),
                 "chroma": datasets.Image(),
-                "label": datasets.features.ClassLabel(names=_NAMES[
-                "gender": datasets.features.ClassLabel(names=_NAMES[
-                "singing_method": datasets.features.ClassLabel(
             }
         ),
         homepage=_HOMEPAGE,
         license="mit",
         citation=_CITATION,
         description=_DESCRIPTION,
         task_templates=[
-            task="
             label_column="label",
         )
     ],
 )
 
-    def _cdn_url(self, ip='127.0.0.1', port=80):
-        try:
-            # easy for local test
-            with socket.create_connection((ip, port), timeout=5):
-                return {
-                    'image': f'http://{ip}/{_NAME}/data/data.zip',
-                    'audio': f'http://{ip}/{_NAME}/data/raw_data.zip'
-                }
-        except (socket.timeout, socket.error):
-            return {
-                'image': f"{_HOMEPAGE}/resolve/main/data/data.zip",
-                'audio': f"{_HOMEPAGE}/resolve/main/data/raw_data.zip"
-            }
 
     def _split_generators(self, dl_manager):
-            elif 'cqt' in dirname:
-                data[fname]['cqt'] = jpg_file
-            elif 'chroma' in dirname:
-                data[fname]['chroma'] = jpg_file
 
-        dataset = list(data.values())
         random.shuffle(dataset)
         data_count = len(dataset)
         p80 = int(data_count * 0.8)
@@ -118,36 +134,40 @@ class chest_falsetto(datasets.GeneratorBasedBuilder):
 
         return [
             datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "files": dataset[:p80]
-                }
             ),
             datasets.SplitGenerator(
-                name=datasets.Split.VALIDATION,
-                gen_kwargs={
-                    "files": dataset[p80:p90]
-                }
             ),
             datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                    "files": dataset[p90:]
-                }
-            )
         ]
 
     def _generate_examples(self, files):
```python
import os
import random
import datasets
from datasets.tasks import ImageClassification

_NAMES = {
    "all": ["m_chest", "f_chest", "m_falsetto", "f_falsetto"],
    "gender": ["female", "male"],
    "singing_method": ["falsetto", "chest"],
}

_DBNAME = os.path.basename(__file__).split(".")[0]

_HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic/{_DBNAME}"

_DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic/{_DBNAME}/repo?Revision=master&FilePath=data"

_CITATION = """\
@dataset{zhaorui_liu_2021_5676893,
  author    = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Zijin Li},
  title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month     = {mar},
  year      = {2024},
  publisher = {HuggingFace},
  version   = {1.2},
  url       = {https://huggingface.co/ccmusic-database}
}
"""

_DESCRIPTION = """\
The raw dataset comprises 1,280 monophonic singing audio files in .wav format (sample rate 22,050 Hz), consisting of chest and falsetto voices performed, recorded, and annotated by students majoring in Vocal Music at the China Conservatory of Music. The chest voice is tagged as chest and the falsetto voice is tagged as falsetto. The dataset also includes the Mel spectrogram, Mel-frequency cepstral coefficients (MFCC), and spectral characteristics of each audio segment, resulting in a total of 5,120 CSV files.
The original dataset did not differentiate between male and female voices, an omission that is critical for accurately identifying chest and falsetto vocal techniques. To address this, we conducted a meticulous manual review and added gender annotations to the dataset. Besides the original content, the preprocessed version used during the evaluation, detailed in Section IV, is also provided. The same two-version approach is applied to the two subsequent classification datasets that have not yet been evaluated: the Music Genre Dataset and the Bel Canto & Chinese Folk Singing Dataset.

For the pre-processed version, the audio was sliced into 0.25-second clips and then transformed into Mel, CQT, and Chroma spectrograms in .jpg format, resulting in 8,974 files. The chest/falsetto label of each file is one of four classes: m_chest, m_falsetto, f_chest, and f_falsetto. The spectrograms, the chest/falsetto label, and the gender label are combined into one data entry, with the first three columns holding the Mel, CQT, and Chroma images and the fourth and fifth columns the chest/falsetto and gender labels, respectively. The integrated dataset also provides a function to shuffle and split the data into training, validation, and test sets in an 8:1:1 ratio. This dataset can be used for singing-related tasks such as singing gender classification or chest and falsetto voice classification.
"""

_URLS = {
    "audio": f"{_DOMAIN}/audio.zip",
    "mel": f"{_DOMAIN}/mel.zip",
    "eval": f"{_DOMAIN}/eval.zip",
}


class chest_falsetto_Config(datasets.BuilderConfig):
    def __init__(self, features, **kwargs):
        super(chest_falsetto_Config, self).__init__(
            version=datasets.Version("1.2.0"), **kwargs
        )
        self.features = features


class chest_falsetto(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.2.0")
    BUILDER_CONFIGS = [
        chest_falsetto_Config(
            name="eval",
            features=datasets.Features(
                {
                    "mel": datasets.Image(),
                    "cqt": datasets.Image(),
                    "chroma": datasets.Image(),
                    "label": datasets.features.ClassLabel(names=_NAMES["all"]),
                    "gender": datasets.features.ClassLabel(names=_NAMES["gender"]),
                    "singing_method": datasets.features.ClassLabel(
                        names=_NAMES["singing_method"]
                    ),
                }
            ),
        ),
        chest_falsetto_Config(
            name="default",
            features=datasets.Features(
                {
                    "audio": datasets.Audio(sampling_rate=22050),
                    "mel": datasets.Image(),
                    "label": datasets.features.ClassLabel(names=_NAMES["all"]),
                    "gender": datasets.features.ClassLabel(names=_NAMES["gender"]),
                    "singing_method": datasets.features.ClassLabel(
                        names=_NAMES["singing_method"]
                    ),
                }
            ),
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            features=self.config.features,
            supervised_keys=("mel", "label"),
            homepage=_HOMEPAGE,
            license="mit",
            citation=_CITATION,
            description=_DESCRIPTION,
            task_templates=[
                ImageClassification(
                    task="image-classification",
                    image_column="mel",
                    label_column="label",
                )
            ],
        )

    def _split_generators(self, dl_manager):
        dataset = []

        if self.config.name == "eval":
            data_files = dl_manager.download_and_extract(_URLS["eval"])
            for path in dl_manager.iter_files([data_files]):
                if "mel" in path and os.path.basename(path).endswith(".jpg"):
                    dataset.append(path)

        else:
            files = {}
            audio_files = dl_manager.download_and_extract(_URLS["audio"])
            mel_files = dl_manager.download_and_extract(_URLS["mel"])
            for path in dl_manager.iter_files([audio_files]):
                fname = os.path.basename(path)
                if fname.endswith(".wav"):
                    item_id = fname.split(".")[0]
                    files[item_id] = {"audio": path}

            for path in dl_manager.iter_files([mel_files]):
                fname = os.path.basename(path)
                if fname.endswith(".jpg"):
                    item_id = fname.split(".")[0]
                    files[item_id]["mel"] = path

            dataset = list(files.values())

        random.shuffle(dataset)
        data_count = len(dataset)
        p80 = int(data_count * 0.8)
        p90 = int(data_count * 0.9)  # 80/10/10 split boundaries
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"files": dataset[:p80]}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION, gen_kwargs={"files": dataset[p80:p90]}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST, gen_kwargs={"files": dataset[p90:]}
            ),
        ]

    def _generate_examples(self, files):
        if self.config.name == "eval":
            for i, fpath in enumerate(files):
                file_name = os.path.basename(fpath)
                sex = file_name.split("_")[1]
                method = file_name.split("_")[2]
                yield i, {
                    "mel": fpath,
                    "cqt": fpath.replace("mel", "cqt"),
                    "chroma": fpath.replace("mel", "chroma"),
                    "label": f"{sex}_{method}",
                    "gender": "male" if sex == "m" else "female",
                    "singing_method": method,
                }

        else:
            for i, fpath in enumerate(files):
                file_name = os.path.basename(fpath["audio"])
                sex = file_name.split("_")[1]
                method = file_name.split("_")[2].split(".")[0]
                yield i, {
                    "audio": fpath["audio"],
                    "mel": fpath["mel"],
                    "label": f"{sex}_{method}",
                    "gender": "male" if sex == "m" else "female",
                    "singing_method": method,
                }
```
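All label fields in the loader above are derived from the file-name pattern `<id>_<sex>_<method>` (e.g. `0001_m_chest` in `data/`). The stand-alone `parse_fname` helper below is a hypothetical sketch of the parsing done in `_generate_examples`:

```python
import os

def parse_fname(path: str) -> dict:
    """Recover label, gender, and singing_method from a raw-subset file name,
    mirroring the string handling in _generate_examples."""
    file_name = os.path.basename(path)
    sex = file_name.split("_")[1]                   # "m" or "f"
    method = file_name.split("_")[2].split(".")[0]  # "chest" or "falsetto"
    return {
        "label": f"{sex}_{method}",
        "gender": "male" if sex == "m" else "female",
        "singing_method": method,
    }

print(parse_fname("data/0001_m_chest.wav"))
# → {'label': 'm_chest', 'gender': 'male', 'singing_method': 'chest'}
```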
data/{chestfalsetto_rawdata.zip → 0001_m_chest.jpg} RENAMED
File without changes

data/{chestfalsetto_data.zip → 48qPVDDIZe0ttsYXrTJEh.jpeg} RENAMED
File without changes

data/W8wy7pkYZtCt3lI5Oq39l.jpeg ADDED
Git LFS Details

data/zm0KorKYtmvOje8qmivHJ.jpeg ADDED
Git LFS Details