# Dataset Card for Nexdata/Chinese_Mandarin_Multi-emotional_Synthesis_Corpus

## Description
22 People - Chinese Mandarin Multi-emotional Synthesis Corpus. It is recorded by native Chinese speakers covering different ages and genders. The texts span six emotions, and the syllables, phonemes and tones are balanced. Professional phoneticians participated in the annotation. It precisely matches the research and development needs of speech synthesis.

For more details, please refer to the link: https://www.nexdata.ai/datasets/1214?source=Huggingface
# Specifications

## Format
48,000 Hz, 24-bit, uncompressed WAV, mono channel
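Since the corpus is distributed as 48 kHz / 24-bit mono WAV, files can be sanity-checked against that spec with Python's standard `wave` module before training. This is an illustrative sketch, not part of the dataset tooling; the file name is hypothetical, and the writer function only exists to give the checker an input:

```python
import wave

# The card's stated format: 48,000 Hz, 24-bit (3 bytes/sample), mono.
SPEC = {"framerate": 48_000, "sampwidth": 3, "nchannels": 1}

def write_silence(path: str, seconds: float = 0.01) -> None:
    """Write a tiny 24-bit mono WAV of silence so the checker has input."""
    n_frames = int(SPEC["framerate"] * seconds)
    with wave.open(path, "wb") as w:
        w.setnchannels(SPEC["nchannels"])
        w.setsampwidth(SPEC["sampwidth"])
        w.setframerate(SPEC["framerate"])
        w.writeframes(b"\x00\x00\x00" * n_frames)  # n_frames of 3 zero bytes

def matches_spec(path: str) -> bool:
    """Return True if the file is 48,000 Hz, 24-bit, mono, as the card states."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == SPEC["framerate"]
                and w.getsampwidth() == SPEC["sampwidth"]
                and w.getnchannels() == SPEC["nchannels"])

write_silence("sample.wav")
print(matches_spec("sample.wav"))  # True
```

A corpus-wide check would simply apply `matches_spec` to every `.wav` under the download directory and flag mismatches.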
## Recording environment
Professional recording studio

## Recording content
Seven emotion categories, including happiness, anger, sadness, surprise, fear and disgust

## Speaker
22 persons, across different age groups and genders

## Device
Microphone

## Language
Mandarin

## Annotation
Word and pinyin transcription, prosodic boundary annotation
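The card does not document the exact annotation file format. A common convention in Chinese TTS corpora is pinyin syllables interleaved with `#1`–`#4` prosodic-boundary markers; assuming that hypothetical format, a minimal parser might look like this:

```python
import re

def split_annotation(line: str):
    """Separate pinyin syllables from #1-#4 prosodic boundary markers.

    Assumes the common Chinese-TTS convention of boundary markers
    interleaved with pinyin tokens; the real Nexdata format may differ.
    Returns (syllables, boundaries), where each boundary is recorded as
    (number of syllables preceding it, marker).
    """
    syllables, boundaries = [], []
    for token in line.split():
        if re.fullmatch(r"#[1-4]", token):
            boundaries.append((len(syllables), token))
        else:
            syllables.append(token)
    return syllables, boundaries

# Hypothetical example line, not taken from the dataset:
syls, bnds = split_annotation("ni3 hao3 #2 shi4 jie4 #4")
print(syls)  # ['ni3', 'hao3', 'shi4', 'jie4']
print(bnds)  # [(2, '#2'), (4, '#4')]
```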
## Application scenarios
Speech synthesis

## Amount of data
The amount of data per person is 140 minutes; each emotion is 20 minutes.

# Licensing Information
Commercial License