voidful committed on
Commit e84484d
1 Parent(s): 944856e

Update README.md

Files changed (1)
  1. README.md +74 -23
README.md CHANGED
@@ -54,98 +54,149 @@ tags:
 
 ## Dataset Description
 
- - **Homepage:**
 https://github.com/DanielLin94144/DUAL-textless-SQA
- - **Repository:**
 https://github.com/DanielLin94144/DUAL-textless-SQA
- - **Paper:**
 https://arxiv.org/abs/2203.04911
- - **Leaderboard:**
- - **Point of Contact:**
 
 ### Dataset Summary
 
- [More Information Needed]
 
 ### Supported Tasks and Leaderboards
 
- [More Information Needed]
 
 ### Languages
 
- [More Information Needed]
 
 ## Dataset Structure
 
 ### Data Instances
 
- [More Information Needed]
 
 ### Data Fields
 
- [More Information Needed]
 
 ### Data Splits
 
- [More Information Needed]
 
 ## Dataset Creation
 
 ### Curation Rationale
 
- [More Information Needed]
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
- [More Information Needed]
 
 #### Who are the source language producers?
 
- [More Information Needed]
 
 ### Annotations
 
 #### Annotation process
 
- [More Information Needed]
 
 #### Who are the annotators?
 
- [More Information Needed]
 
 ### Personal and Sensitive Information
 
- [More Information Needed]
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
- [More Information Needed]
 
 ### Discussion of Biases
 
- [More Information Needed]
 
 ### Other Known Limitations
 
- [More Information Needed]
 
 ## Additional Information
 
 ### Dataset Curators
 
- [More Information Needed]
 
 ### Licensing Information
 
- [More Information Needed]
 
 ### Citation Information
 
- [More Information Needed]
 
 ### Contributions
 
 
 ## Dataset Description
 
+ - Homepage:
 https://github.com/DanielLin94144/DUAL-textless-SQA
+ - Repository:
 https://github.com/DanielLin94144/DUAL-textless-SQA
+ - Paper:
 https://arxiv.org/abs/2203.04911
+ - Leaderboard:
+ - Point of Contact:
+
+ Download audio data: [https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz](https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz)
+ Unzip audio data: `tar -xf nmsqa_audio.tar.gz`
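+
+ The same download-and-extract step can be scripted. A minimal Python sketch, assuming the `huggingface_hub` client is installed (the repo id and filename are taken from the link above):
+
+ ```python
+ import tarfile
+
+ from huggingface_hub import hf_hub_download
+
+ # Fetch nmsqa_audio.tar.gz from the dataset repo (cached locally by the hub client).
+ archive_path = hf_hub_download(
+     repo_id="voidful/NMSQA",
+     filename="nmsqa_audio.tar.gz",
+     repo_type="dataset",
+ )
+
+ # Equivalent to `tar -xf nmsqa_audio.tar.gz`: unpack the audio files.
+ with tarfile.open(archive_path, "r:gz") as tar:
+     tar.extractall("nmsqa_audio")
+ ```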
 
 ### Dataset Summary
 
+ The Natural Multi-speaker Spoken Question Answering (NMSQA) dataset targets textless spoken question answering. It is built from the SQuAD dataset and contains spoken questions and passages, along with the original text, transcriptions, and audio files. The dataset was created to evaluate models on textless spoken question answering.
 
 ### Supported Tasks and Leaderboards
 
+ The primary task is textless spoken question answering: answering questions from spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition.
 
 ### Languages
 
+ The dataset is in English.
 
 ## Dataset Structure
 
 ### Data Instances
 
+ Each instance in the dataset contains the following fields (a loading sketch follows the list):
+
+ - id: Unique identifier for the instance
+ - title: The title of the passage
+ - context: The passage text
+ - question: The question text
+ - answer_start: The start index of the answer in the text
+ - audio_full_answer_end: The end position of the audio answer in seconds
+ - audio_full_answer_start: The start position of the audio answer in seconds
+ - audio_full_neg_answer_end: The end position, in seconds, of an incorrect answer that contains the same words
+ - audio_full_neg_answer_start: The start position, in seconds, of an incorrect answer that contains the same words
+ - audio_segment_answer_end: The end position of the audio answer in seconds, within the segment
+ - audio_segment_answer_start: The start position of the audio answer in seconds, within the segment
+ - text: The answer text
+ - content_segment_audio_path: The audio path for the content segment
+ - content_full_audio_path: The audio path for the full content
+ - content_audio_sampling_rate: The audio sampling rate
+ - content_audio_speaker: The audio speaker
+ - content_segment_text: The segment text of the content
+ - content_segment_normalized_text: The normalized text used to generate the audio
+ - question_audio_path: The audio path for the question
+ - question_audio_sampling_rate: The audio sampling rate
+ - question_audio_speaker: The audio speaker
+ - question_normalized_text: The normalized text used to generate the audio
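+
+ A minimal loading sketch, assuming the dataset loads through the standard `datasets` API under the repo id `voidful/NMSQA` and exposes the field names listed above:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load NMSQA from the Hugging Face Hub.
+ dataset = load_dataset("voidful/NMSQA")
+
+ # Inspect the first training instance and a few of its fields.
+ sample = dataset["train"][0]
+ print(sample["id"], sample["title"])
+ print(sample["question"])
+ print(sample["question_audio_path"], sample["question_audio_sampling_rate"])
+ ```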
 
 ### Data Fields
 
+ The dataset includes the following data fields:
+
+ - id
+ - title
+ - context
+ - question
+ - answers
+ - content_segment_audio_path
+ - content_full_audio_path
+ - content_audio_sampling_rate
+ - content_audio_speaker
+ - content_segment_text
+ - content_segment_normalized_text
+ - question_audio_path
+ - question_audio_sampling_rate
+ - question_audio_speaker
+ - question_normalized_text
 
 ### Data Splits
 
+ The dataset is split into train, dev, and test sets.
 
134
  ## Dataset Creation
135
 
136
  ### Curation Rationale
137
 
138
+ The NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.
139
 
140
  ### Source Data
141
 
142
+ The NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.
143
+
144
  #### Initial Data Collection and Normalization
145
 
146
+ The initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.
147
 
148
  #### Who are the source language producers?
149
 
150
+ The source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.
151
 
152
  ### Annotations
153
 
154
  #### Annotation process
155
 
156
+ The annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.
157
 
158
  #### Who are the annotators?
159
 
160
+ The annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.
161
 
162
  ### Personal and Sensitive Information
163
 
164
+ The dataset does not contain any personal or sensitive information.
165
 
166
  ## Considerations for Using the Data
167
 
168
  ### Social Impact of Dataset
169
 
170
+ The NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.
171
 
172
  ### Discussion of Biases
173
 
174
+ The dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.
175
 
176
  ### Other Known Limitations
177
 
178
+ As the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.
179
 
180
  ## Additional Information
181
 
182
  ### Dataset Curators
183
 
184
+ The NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.
185
 
186
  ### Licensing Information
187
 
188
+ The licensing information for the dataset is not explicitly mentioned.
189
 
190
  ### Citation Information
191
 
192
+ ```css
193
+ @article{lin2022dual,
194
+ title={DUAL: Textless Spoken Question Answering with Speech Discrete Unit Adaptive Learning},
195
+ author={Lin, Guan-Ting and Chuang, Yung-Sung and Chung, Ho-Lam and Yang, Shu-wen and Chen, Hsuan-Jui and Li, Shang-Wen and Mohamed, Abdelrahman and Lee, Hung-yi and Lee, Lin-shan},
196
+ journal={arXiv preprint arXiv:2203.04911},
197
+ year={2022}
198
+ }
199
+ ```
200
 
201
  ### Contributions
202