admin committed
Commit 4b9c64a
Parent: 38f0621

upl scripts

Files changed (3):
1. .gitignore +1 -0
2. README.md +212 -1
3. chest_falsetto.py +155 -0
.gitignore ADDED
@@ -0,0 +1 @@
+ rename.sh
README.md CHANGED
@@ -1,3 +1,214 @@
  ---
- license: mit
+ license: cc-by-nc-nd-4.0
+ task_categories:
+ - audio-classification
+ language:
+ - zh
+ - en
+ tags:
+ - music
+ - art
+ pretty_name: Chest voice and Falsetto Dataset
+ size_categories:
+ - 1K<n<10K
+ viewer: false
  ---
+
+ # Dataset Card for Chest voice and Falsetto Dataset
+ The raw dataset, sourced from the [Chest Voice and Falsetto Dataset](https://ccmusic-database.github.io/en/database/ccm.html#shou3), includes 1,280 monophonic singing audio files in .wav format, performed, recorded, and annotated by students majoring in Vocal Music at the China Conservatory of Music. Chest voice is tagged as "chest" and falsetto voice as "falsetto". The dataset also includes the Mel spectrogram, Mel-frequency cepstral coefficients (MFCC), and spectral features of each audio segment, totaling 5,120 CSV files.
+
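+ As an illustration of the features shipped with the raw dataset, a Mel spectrogram and MFCCs can be computed from a clip along these lines (a minimal sketch assuming `librosa` is installed; the file name is hypothetical):
+
+ ```python
+ import librosa
+
+ # Load a clip at the dataset's native 22050 Hz sampling rate.
+ y, sr = librosa.load("001_m_chest.wav", sr=22050)
+
+ # Mel spectrogram and MFCCs comparable to the dataset's CSV features.
+ mel = librosa.feature.melspectrogram(y=y, sr=sr)
+ mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
+ print(mel.shape, mfcc.shape)
+ ```
+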
+ The raw dataset did not distinguish between male and female voices, a detail critical for accurately identifying chest and falsetto vocal techniques. To correct this, we carried out a careful manual review and added gender annotations. From this process we constructed the `default subset` of the current integrated version of the dataset, viewable in the [viewer](https://www.modelscope.cn/datasets/ccmusic-database/chest_falsetto/dataPeview).
+
+ Because the default subset had not been evaluated, we built the `eval subset` from it to verify the integrated dataset's effectiveness and completed the evaluation, viewable at [chest_falsetto](https://www.modelscope.cn/models/ccmusic-database/chest_falsetto). Below is a brief overview of the data structure of each subset in the integrated dataset.
+
+ ## Dataset Structure
+ <style>
+   .datastructure td {
+     vertical-align: middle !important;
+     text-align: center;
+   }
+   .datastructure th {
+     text-align: center;
+   }
+ </style>
+
+ ### Default Subset
+ <table class="datastructure">
+   <tr>
+     <th>audio</th>
+     <th>mel (spectrogram)</th>
+     <th>label (4-class)</th>
+     <th>gender (2-class)</th>
+     <th>singing_method (2-class)</th>
+   </tr>
+   <tr>
+     <td>.wav, 22050Hz</td>
+     <td>.jpg, 22050Hz</td>
+     <td>m_chest, m_falsetto, f_chest, f_falsetto</td>
+     <td>male, female</td>
+     <td>chest, falsetto</td>
+   </tr>
+   <tr>
+     <td>...</td>
+     <td>...</td>
+     <td>...</td>
+     <td>...</td>
+     <td>...</td>
+   </tr>
+ </table>
+
+ ### Eval Subset
+ <table class="datastructure">
+   <tr>
+     <th>mel</th>
+     <th>cqt</th>
+     <th>chroma</th>
+     <th>label (4-class)</th>
+     <th>gender (2-class)</th>
+     <th>singing_method (2-class)</th>
+   </tr>
+   <tr>
+     <td>.jpg, 0.496s, 22050Hz</td>
+     <td>.jpg, 0.496s, 22050Hz</td>
+     <td>.jpg, 0.496s, 22050Hz</td>
+     <td>m_chest, m_falsetto, f_chest, f_falsetto</td>
+     <td>male, female</td>
+     <td>chest, falsetto</td>
+   </tr>
+   <tr>
+     <td>...</td>
+     <td>...</td>
+     <td>...</td>
+     <td>...</td>
+     <td>...</td>
+     <td>...</td>
+   </tr>
+ </table>
+
+ <img src="https://www.modelscope.cn/api/v1/datasets/ccmusic-database/chest_falsetto/repo?Revision=master&FilePath=.%2Fdata%2Ffalsetto.png&View=true">
+
+ ### Data Instances
+ .zip(.wav, .jpg)
+
+ ### Data Fields
+ m_chest, f_chest, m_falsetto, f_falsetto
+
+ ### Data Splits
+ | Split (6:2:2) / Subset |   default & eval    |
+ | :--------------------: | :-----------------: |
+ |         train          |         767         |
+ |       validation       |         256         |
+ |          test          |         257         |
+ |         total          |        1280         |
+ |   total duration (s)   | `640.0513605442178` |
+
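+ The split is stratified: each of the four classes is shuffled and divided 6:2:2 with floor rounding (see `chest_falsetto.py` in this commit), which is why the totals come out 767/256/257 rather than an exact 768/256/256. A quick sketch of the mechanism; the class size used here is hypothetical:
+
+ ```python
+ def split_sizes(count: int):
+     """Per-class 6:2:2 split with floor rounding, as in chest_falsetto.py."""
+     p60 = int(count * 0.6)
+     p80 = int(count * 0.8)
+     return p60, p80 - p60, count - p80
+
+ # A hypothetical class of 321 clips yields 192/64/65, so four classes
+ # need not sum to an exact 6:2:2 partition of 1280.
+ print(split_sizes(321))
+ ```
+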
+ ## Viewer
+ <https://www.modelscope.cn/datasets/ccmusic-database/chest_falsetto/dataPeview>
+
+ ## Usage
+ ### Default Subset
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/chest_falsetto", name="default")
+ for item in ds["train"]:
+     print(item)
+
+ for item in ds["validation"]:
+     print(item)
+
+ for item in ds["test"]:
+     print(item)
+ ```
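+
+ Each record in the default subset carries the five fields defined by the loader script (`audio`, `mel`, `label`, `gender`, `singing_method`). A quick sketch of inspecting one decoded record, assuming the load above succeeded:
+
+ ```python
+ item = ds["train"][0]
+ # `audio` decodes to a dict holding the waveform and its sampling rate.
+ print(item["audio"]["sampling_rate"], len(item["audio"]["array"]))
+ # ClassLabel fields are stored as ints; int2str() recovers the names.
+ print(ds["train"].features["label"].int2str(item["label"]))
+ print(ds["train"].features["gender"].int2str(item["gender"]))
+ ```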
+
+ ### Eval Subset
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/chest_falsetto", name="eval")
+ for item in ds["train"]:
+     print(item)
+
+ for item in ds["validation"]:
+     print(item)
+
+ for item in ds["test"]:
+     print(item)
+ ```
+
+ ## Maintenance
+ `GIT_LFS_SKIP_SMUDGE=1` clones the repository without downloading the LFS-tracked media files, keeping the initial checkout small:
+ ```bash
+ GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/chest_falsetto
+ cd chest_falsetto
+ ```
+
+ ## Dataset Description
+ - **Homepage:** <https://ccmusic-database.github.io>
+ - **Repository:** <https://huggingface.co/datasets/ccmusic-database/chest_falsetto>
+ - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
+ - **Leaderboard:** <https://ccmusic-database.github.io/team.html>
+ - **Point of Contact:** <https://www.modelscope.cn/datasets/ccmusic-database/chest_falsetto>
+
+ ### Dataset Summary
+ For the pre-processed version, each audio recording was cut into 0.25-second clips and transformed into Mel, CQT, and Chroma spectrograms in .jpg format, resulting in 8,974 files. Each file carries one of four chest/falsetto labels: m_chest, m_falsetto, f_chest, and f_falsetto. The spectrograms, the chest/falsetto label, and the gender label are combined into one data entry: the first three columns hold the Mel, CQT, and Chroma images, and the fourth and fifth columns hold the chest/falsetto label and the gender label, respectively. The integrated dataset also shuffles and splits the data into training, validation, and test sets in a 6:2:2 ratio. This dataset can be used for singing-related tasks such as gender classification of singing voices or chest and falsetto voice classification.
+
+ ### Supported Tasks and Leaderboards
+ Audio classification, singing method classification, voice classification
+
+ ### Languages
+ Chinese, English
+
+ ## Dataset Creation
+ ### Curation Rationale
+ Lack of a dataset for chest voice and falsetto
+
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ Zhaorui Liu, Monan Zhou
+
+ #### Who are the source language producers?
+ Students from CCMUSIC
+
+ ### Annotations
+ #### Annotation process
+ 1,280 monophonic singing recordings (.wav format) of chest and falsetto voices, with chest voice tagged as _chest_ and falsetto voice tagged as _falsetto_.
+
+ #### Who are the annotators?
+ Students from CCMUSIC
+
+ ### Personal and Sensitive Information
+ None
+
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ Promoting the development of AI in the music industry
+
+ ### Discussion of Biases
+ Covers only chest and falsetto voices
+
+ ### Other Known Limitations
+ Recordings are cut into very short slices;
+ The CQT spectrogram column suffers from spectral leakage, which cannot be effectively avoided because each audio slice is only about 0.5 s long.
+
+ ## Additional Information
+ ### Dataset Curators
+ Zijin Li
+
+ ### Evaluation
+ <https://huggingface.co/ccmusic-database/chest_falsetto>
+
+ ### Citation Information
+ ```bibtex
+ @dataset{zhaorui_liu_2021_5676893,
+   author    = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
+   title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+   month     = {mar},
+   year      = {2024},
+   publisher = {HuggingFace},
+   version   = {1.2},
+   url       = {https://huggingface.co/ccmusic-database}
+ }
+ ```
+
+ ### Contributions
+ Provide a dataset for distinguishing chest and falsetto voices
chest_falsetto.py ADDED
@@ -0,0 +1,155 @@
+ import os
+ import random
+ import datasets
+ from datasets.tasks import ImageClassification
+
+ _NAMES = {
+     "all": ["m_chest", "f_chest", "m_falsetto", "f_falsetto"],
+     "gender": ["female", "male"],
+     "singing_method": ["falsetto", "chest"],
+ }
+
+ _DBNAME = os.path.basename(__file__).split(".")[0]
+
+ _HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic-database/{_DBNAME}"
+
+ _DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic-database/{_DBNAME}/repo?Revision=master&FilePath=data"
+
+ _URLS = {
+     "audio": f"{_DOMAIN}/audio.zip",
+     "mel": f"{_DOMAIN}/mel.zip",
+     "eval": f"{_DOMAIN}/eval.zip",
+ }
+
+
+ class chest_falsetto(datasets.GeneratorBasedBuilder):
+     def _info(self):
+         return datasets.DatasetInfo(
+             # The default subset pairs raw audio with its Mel spectrogram;
+             # the eval subset carries Mel, CQT and Chroma images instead.
+             features=(
+                 datasets.Features(
+                     {
+                         "audio": datasets.Audio(sampling_rate=22050),
+                         "mel": datasets.Image(),
+                         "label": datasets.features.ClassLabel(names=_NAMES["all"]),
+                         "gender": datasets.features.ClassLabel(names=_NAMES["gender"]),
+                         "singing_method": datasets.features.ClassLabel(
+                             names=_NAMES["singing_method"]
+                         ),
+                     }
+                 )
+                 if self.config.name == "default"
+                 else datasets.Features(
+                     {
+                         "mel": datasets.Image(),
+                         "cqt": datasets.Image(),
+                         "chroma": datasets.Image(),
+                         "label": datasets.features.ClassLabel(names=_NAMES["all"]),
+                         "gender": datasets.features.ClassLabel(names=_NAMES["gender"]),
+                         "singing_method": datasets.features.ClassLabel(
+                             names=_NAMES["singing_method"]
+                         ),
+                     }
+                 )
+             ),
+             supervised_keys=("mel", "label"),
+             homepage=_HOMEPAGE,
+             license="CC-BY-NC-ND",
+             version="1.2.0",
+             task_templates=[
+                 ImageClassification(
+                     task="image-classification",
+                     image_column="mel",
+                     label_column="label",
+                 )
+             ],
+         )
+
+     def _split_generators(self, dl_manager):
+         dataset = []
+         if self.config.name == "default":
+             # Pair each .wav with its .jpg Mel spectrogram by shared file stem.
+             files = {}
+             audio_files = dl_manager.download_and_extract(_URLS["audio"])
+             mel_files = dl_manager.download_and_extract(_URLS["mel"])
+             for fpath in dl_manager.iter_files([audio_files]):
+                 fname: str = os.path.basename(fpath)
+                 if fname.endswith(".wav"):
+                     item_id = fname.split(".")[0]
+                     files[item_id] = {"audio": fpath}
+
+             for fpath in dl_manager.iter_files([mel_files]):
+                 fname = os.path.basename(fpath)
+                 if fname.endswith(".jpg"):
+                     item_id = fname.split(".")[0]
+                     files[item_id]["mel"] = fpath
+
+             dataset = list(files.values())
+
+         else:
+             data_files = dl_manager.download_and_extract(_URLS["eval"])
+             for fpath in dl_manager.iter_files([data_files]):
+                 if "mel" in fpath and os.path.basename(fpath).endswith(".jpg"):
+                     dataset.append(fpath)
+
+         # Group items by 4-way label (sex and method are the 2nd and 3rd
+         # underscore-separated fields of the file name), then split 6:2:2.
+         categories = {}
+         for name in _NAMES["all"]:
+             categories[name] = []
+
+         for data in dataset:
+             fpath = data["audio"] if self.config.name == "default" else data
+             filename: str = os.path.basename(fpath)[:-4]
+             label = "_".join(filename.split("_")[1:3])
+             categories[label].append(data)
+
+         testset, validset, trainset = [], [], []
+         for cls in categories:
+             # Note: the shuffle is unseeded, so splits vary between runs.
+             random.shuffle(categories[cls])
+             count = len(categories[cls])
+             p60 = int(count * 0.6)
+             p80 = int(count * 0.8)
+             trainset += categories[cls][:p60]
+             validset += categories[cls][p60:p80]
+             testset += categories[cls][p80:]
+
+         random.shuffle(trainset)
+         random.shuffle(validset)
+         random.shuffle(testset)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN, gen_kwargs={"files": trainset}
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION, gen_kwargs={"files": validset}
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST, gen_kwargs={"files": testset}
+             ),
+         ]
+
+     def _generate_examples(self, files):
+         if self.config.name == "default":
+             for i, fpath in enumerate(files):
+                 file_name = os.path.basename(fpath["audio"])
+                 sex = file_name.split("_")[1]
+                 method = file_name.split("_")[2].split(".")[0]
+                 yield i, {
+                     "audio": fpath["audio"],
+                     "mel": fpath["mel"],
+                     "label": f"{sex}_{method}",
+                     "gender": "male" if sex == "m" else "female",
+                     "singing_method": method,
+                 }
+
+         else:
+             for i, fpath in enumerate(files):
+                 file_name: str = os.path.basename(fpath)
+                 sex = file_name.split("_")[1]
+                 method = file_name.split("_")[2]
+                 yield i, {
+                     "mel": fpath,
+                     # CQT and Chroma images mirror the Mel file layout.
+                     "cqt": fpath.replace("mel", "cqt"),
+                     "chroma": fpath.replace("mel", "chroma"),
+                     "label": f"{sex}_{method}",
+                     "gender": "male" if sex == "m" else "female",
+                     "singing_method": method,
+                 }