Nexdata committed on
Commit a825fe9
1 Parent(s): bca0bb2

Update README.md

Files changed (1)
  1. README.md +29 -127
README.md CHANGED
@@ -1,127 +1,29 @@
- ---
- YAML tags:
- - copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
- ---
-
- # Dataset Card for Nexdata/Mandarin_Heavy_Accent_Speech_Data_by_Mobile_Phone
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://www.nexdata.ai/datasets/45?source=Huggingface
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- This dataset collects recordings from 2,568 local Chinese speakers from Henan, Shanxi, Sichuan, Hunan and Fujian. It is Mandarin speech data with a heavy accent. The recorded content consists of sentences that the speakers produce freely in response to guiding questions.
44
-
45
- For more details, please refer to the link: https://www.nexdata.ai/datasets/45?source=Huggingface
46
-
47
-
48
- ### Supported Tasks and Leaderboards
49
-
50
- automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
51
-
52
- ### Languages
53
-
54
- Accented Mandarin
55
- ## Dataset Structure
56
-
57
- ### Data Instances
58
-
59
- [More Information Needed]
60
-
61
- ### Data Fields
62
-
63
- [More Information Needed]
64
-
65
- ### Data Splits
66
-
67
- [More Information Needed]
68
-
69
- ## Dataset Creation
70
-
71
- ### Curation Rationale
72
-
73
- [More Information Needed]
74
-
75
- ### Source Data
76
-
77
- #### Initial Data Collection and Normalization
78
-
79
- [More Information Needed]
80
-
81
- #### Who are the source language producers?
82
-
83
- [More Information Needed]
84
-
85
- ### Annotations
86
-
87
- #### Annotation process
88
-
89
- [More Information Needed]
90
-
91
- #### Who are the annotators?
92
-
93
- [More Information Needed]
94
-
95
- ### Personal and Sensitive Information
96
-
97
- [More Information Needed]
98
-
99
- ## Considerations for Using the Data
100
-
101
- ### Social Impact of Dataset
102
-
103
- [More Information Needed]
104
-
105
- ### Discussion of Biases
106
-
107
- [More Information Needed]
108
-
109
- ### Other Known Limitations
110
-
111
- [More Information Needed]
112
-
113
- ## Additional Information
114
-
115
- ### Dataset Curators
116
-
117
- [More Information Needed]
118
-
119
- ### Licensing Information
120
-
121
- Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
 
+ # Dataset Card for Nexdata/Chinese_Mandarin_Multi-emotional_Synthesis_Corpus
+
+ ## Description
+ 22 People - Chinese Mandarin Multi-emotional Synthesis Corpus. It is recorded by native Chinese speakers, covering different ages and genders. The corpus contains six emotional texts, and the syllables, phonemes and tones are balanced. Professional phoneticians participated in the annotation. It precisely matches the research and development needs of speech synthesis.
+
+ For more details, please refer to the link: https://www.nexdata.ai/datasets/1214?source=Huggingface
+
+ # Specifications
+ ## Format
+ 48,000 Hz, 24-bit, uncompressed WAV, mono channel
+ ## Recording environment
+ professional recording studio
+ ## Recording content
+ seven emotions (happiness, anger, sadness, surprise, fear, disgust)
+ ## Speaker
+ 22 persons, different age groups and genders
+ ## Device
+ microphone
+ ## Language
+ Mandarin
+ ## Annotation
+ word and pinyin transcription, prosodic boundary annotation
+ ## Application scenarios
+ speech synthesis
+ ## The amount of data
+ The amount of data per person is 140 minutes, with 20 minutes per emotion.
+
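+ A minimal sketch for spot-checking that a downloaded recording matches the stated format (48,000 Hz, 24-bit, mono, uncompressed WAV), assuming the third-party `soundfile` package and a hypothetical local file name:
+
+ ```python
+ import soundfile as sf  # third-party dependency: pip install soundfile
+
+ # Hypothetical file name; actual names depend on the delivered package structure.
+ info = sf.info("example_recording.wav")
+
+ # Compare against the format stated on this card.
+ assert info.samplerate == 48_000, "expected 48 kHz sampling rate"
+ assert info.channels == 1, "expected a mono channel"
+ assert info.subtype == "PCM_24", "expected 24-bit PCM"
+ print(info)
+ ```
+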
+ # Licensing Information
+ Commercial License