---
language: 
- en
pretty_name: "ChaLL"
tags:
- error-preservation
- sla
- children
license: "apache-2.0" # todo
task_categories:
- automatic-speech-recognition
---

# Dataset Card for ChaLL

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/mict-zhaw/chall_e2e_stt
- **Repository:** https://github.com/mict-zhaw/chall_e2e_stt
- **Paper:** tbd
- **Leaderboard:**
- **Point of Contact:** mict@zhaw.ch

### Dataset Summary

This dataset contains audio recordings of spontaneous speech by young learners of English in Switzerland.
The recordings capture various language learning tasks designed to elicit authentic communication from the students.
The dataset includes detailed verbatim transcriptions with annotations for errors made by the learners.
The transcripts were prepared by a professional transcription service, and each recording was associated with detailed metadata, including school grade, recording conditions, and error annotations.

> [!IMPORTANT]  
> <b>Data Availability</b>: The dataset that we collected contains sensitive data of minors and thus cannot be shared publicly. The
> data can, however, be accessed as part of a joint project with one or several of the original project
> partners, subject to a collaboration agreement (<b>yet to be detailed</b>).

To use the ChaLL dataset, you need to download it manually.
Once downloaded, extract all files into a single folder.
You can then load the dataset with the following command:

```python
from datasets import load_dataset
dataset = load_dataset('mict-zhaw/chall', data_dir='path/to/folder/folder_name')
```

Ensure the path specified in `data_dir` correctly points to the folder where you have extracted the dataset files.

Examples in this dataset are generated using the `soundfile` library (for reading and chunking audio).
To handle the audio data correctly, install `soundfile` in your environment:

```shell
pip install soundfile
```
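
Once loaded, each example exposes the decoded waveform and its sampling rate directly. A minimal sketch (the path is illustrative; the inspected fields follow the data instances documented below):

```python
from datasets import load_dataset

# Load the dataset from the locally extracted files
dataset = load_dataset('mict-zhaw/chall', data_dir='path/to/folder/folder_name')

# Inspect the first example; without a `folds` setting, all data is in the train split
example = dataset['train'][0]
print(example['audio_id'], example['raw_text'])
print(len(example['audio']['array']), example['audio']['sampling_rate'])
```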


### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The primary language represented in this dataset is English, specifically as spoken by Swiss children who are learners of the language.
This includes a variety of accents and dialectal influences from the German-speaking regions of Switzerland.


## Dataset Structure

The dataset can be loaded using different configurations to suit various experimental setups.

### Dataset Builder Configuration

The configurations define how the data is preprocessed and loaded into the environment.
Below are the details of the configurations used in experiments:

#### `original`

This configuration uses the data in its raw, unmodified form while ensuring all participant information is anonymized.
It preserves the data's original structure, without segmentation, filtering, or other preprocessing.

```python
from datasets import load_dataset
dataset = load_dataset('mict-zhaw/chall', 'original', data_dir='path/to/folder/folder_name')
```

#### `asr`

This configuration is intended for ASR experiments, enabling segment splitting for more granular processing of the audio data.

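It can be loaded in the same way as the other configurations:

```python
from datasets import load_dataset
dataset = load_dataset('mict-zhaw/chall', 'asr', data_dir='path/to/folder/folder_name')
```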

#### `asr_acl`

This configuration reproduces the settings used in the related research paper.
It applies the segmentation and preprocessing steps described below to prepare the data.

The results for the paper were generated at a time when the data was not yet complete.
Thus, this dataset configuration comprises approximately 85 hours (excluding pauses between utterances) of spontaneous English speech recordings from young learners in Switzerland, collected from 327 distinct speakers in grades 4 to 6.
The dataset includes 45,004 individual utterances and is intended to train an ASR system that preserves learner errors for corrective feedback in language learning.

The configuration splits segments with a maximum pause length of 12 seconds, a maximum chunk length of 12 seconds, and a minimum chunk length of 0.5 seconds; it removes trailing pauses, converts text to lowercase, and converts numbers to words.

```python
from datasets import load_dataset
dataset = load_dataset('mict-zhaw/chall', 'asr_acl', data_dir='path/to/folder/folder_name')
```

#### Custom

The `ChallConfig` class provides various parameters that can be customized:

- **split_segments (`bool`)**: Whether to split the audio into smaller segments.
- **max_chunk_length (`float` or `None`)**: Maximum length of each audio chunk in seconds (used only if `split_segments` is True).
- **min_chunk_length (`float` or `None`)**: Minimum length of each audio chunk in seconds (used only if `split_segments` is True).
- **max_pause_length (`float` or `None`)**: Maximum allowable pause length within segments, in seconds (used only if `split_segments` is True).
- **remove_trailing_pauses (`bool`)**: Whether to remove trailing pauses from segments (used only if `split_segments` is True).
- **lowercase (`bool`)**: Whether to convert all text to lowercase.
- **num_to_words (`bool`)**: Whether to convert numerical expressions to words.
- **allowed_chars (`set`)**: Set of allowed characters in the text. Automatically set based on the `lowercase` parameter.
- **special_terms_mapping (`dict`)**: Dictionary for mapping special terms to their replacements.
- **stratify_column (`str` or `None`)**: Column used for stratifying the data into different folds.
- **folds (`dict` or `None`)**: Dictionary defining the data folds for stratified sampling.

Custom configurations can be used alone or in combination with existing ones,
and they will override the predefined defaults.

```python
from datasets import load_dataset
dataset = load_dataset('mict-zhaw/chall', data_dir='path/to/folder/folder_name', **kwargs)
dataset = load_dataset('mict-zhaw/chall', 'asr_acl', data_dir='path/to/folder/folder_name', **kwargs)
```
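
For example, a custom segmentation setup might look as follows (the parameter values here are purely illustrative, not recommendations):

```python
from datasets import load_dataset

# Illustrative custom settings; they override the predefined defaults
dataset = load_dataset(
    'mict-zhaw/chall',
    'asr',
    data_dir='path/to/folder/folder_name',
    split_segments=True,
    max_chunk_length=10.0,
    min_chunk_length=0.5,
    max_pause_length=1.0,
    remove_trailing_pauses=True,
    lowercase=True,
)
```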

### Data Instances

A typical data instance in this dataset includes an audio file, its full transcription, error annotations, and associated metadata such as the speaker's grade level and recording conditions.
Here is an example:

#### `split_segments == True`

When `split_segments` is set to True, the audio data is divided into utterances.
An utterance data instance includes the spoken text from one participant along with metadata such as `school_grade`, `area_of_school_code`, `background_noise`, and `intervention`.
The audio is available as a byte array under `audio`.


```json
{
  "audio_id": "S004_A005_000", 
  "intervention": 4, 
  "school_grade": "6", 
  "area_of_school_code": 5, 
  "background_noise": false, 
  "raw_text": "A male or is it a female?", 
  "clear_text": "a male or is it a female", 
  "words": {
    "start": [0.4099999964237213, 0.5600000023841858, 1.0399999618530273, 1.25, 1.3700000047683716, 1.5499999523162842, 1.6699999570846558],
    "end": [0.5400000214576721, 1.0399999618530273, 1.25, 1.3700000047683716, 1.5499999523162842, 1.6699999570846558, 2.5399999618530273],
    "duration": [0.1300000250339508, 0.47999995946884155, 0.21000003814697266, 0.12000000476837158, 0.1799999475479126, 0.12000000476837158, 0.8700000047683716],
    "text": ["A", "male", "or", "is", "it", "a", "female?"]
  }, 
  "audio": {
    "path": false, 
    "array": [0, 0, 0, "...", 0, 0, 0], 
    "sampling_rate": 16000
  }
}
```

#### `split_segments == False`

When `split_segments` is set to False, the audio remains intact and includes multiple turns with one or more speakers.
In this case, additional participant metadata is present, but the speakers (from the transcript) cannot be aligned with the participants and need not match them in number:
the transcription agency may define more than one speaker for a single participant.

```json
{
  "audio_id": "S001_A046",
  "intervention": 1,
  "school_grade": "4",
  "area_of_school_code": 2,
  "raw_text": "If you could have-have any superpower, what would it be? I would choose to have invincibility because when I'm invincible, I can't die or get hurt by anyone and I think this concept is very cool...",
  "clear_text": "if you could have have any superpower what would it be i would choose to have invincibility because when i'm invincible i can't die or get hurt by anyone and i think this concept is very cool...", 
  "participants": {
    "estimated_l2_proficiency": [null, null], 
    "gender": ["M", "F"], 
    "languages": ["NNS", "NNS"], 
    "pseudonym": ["P033", "P034"], 
    "school_grade": [6, 6], 
    "year_of_birth": [2010, 2011]
  }, 
  "background_noise": true, 
  "speakers": {
    "name": ["Participant 1", "Participant 2"], 
    "spkid": ["S002_A004_SPK0", "S002_A004_SPK1"]
  }, 
  "segments": {
      "speaker": ["S002_A004_SPK0", "S002_A004_SPK1", ...],
      "words": [
        {
          "start": [1.8799999952316284, 2.119999885559082, 3.2899999618530273, ...],
          "end": [2.119999885559082, 2.390000104904175, 3.859999895095825, ...],
          "duration": [0.2399998903274536, 0.2700002193450928, 0.5699999332427979, ...],
          "text": ["If", "you", "could", "have-have", "any", "superpower,", "what", "would", "it", "be?"]
        },       {
          "start": [10.760000228881836, 11.029999732971191, 11.420000076293945, ...],
          "end": [11.029999732971191, 11.420000076293945, 12.170000076293945, ...],
          "duration": [0.26999950408935547, 0.3900003433227539, 0.75, ...],
          "text": ["I", "would", "choose", "to", "have", "invincibility", "because", "when", "I'm", "invincible,", "I", "can't", "die", "or", "get", "hurt", "by", "anyone", "and", "I", "think", "this", "concept", "is", "very", "cool."]
        }
    ]
  }, 
  "audio": {
    "path": null,
    "array": [0, 0, 0, ..., 0, 0, 0], 
    "sampling_rate": 16000
  }
}
```


### Data Fields

- **audio_id**: A unique identifier for the audio recording.
- **intervention**: An integer representing the type or stage of intervention.
- **school_grade**: The grade level of the student(s) involved in the recording.
- **area_of_school_code**: A code representing a specific area within the school.
- **raw_text**: The raw transcription of the audio, capturing exactly what was spoken.
- **clear_text**: A cleaned version of the raw text, formatted for easier analysis.
- **background_noise**: A boolean indicating whether background noise is present in the recording.
- **audio**: An object containing the audio data and related information.
  - **path**: The file path of the audio recording (can be null).
  - **array**: An array representing the audio waveform data.
  - **sampling_rate**: The rate at which the audio was sampled, in Hz.

In addition to the common fields, there are specific fields depending on `split_segments`:

#### Utterance Data Instance (`True`)

- **words**: An object containing details about each word spoken in the utterance.
  - **start**: A list of start times for each word.
  - **end**: A list of end times for each word.
  - **duration**: A list of durations for each word.
  - **text**: A list of words spoken.
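
Since `start` and `end` are given in seconds, the samples for a single word can be cut out of the waveform by multiplying with the sampling rate. A minimal sketch, assuming `example` is an utterance instance as shown above:

```python
# Cut the audio samples of the second word out of the utterance waveform
sr = example['audio']['sampling_rate']
start = example['words']['start'][1]
end = example['words']['end'][1]
word_audio = example['audio']['array'][int(start * sr):int(end * sr)]
print(example['words']['text'][1], len(word_audio))
```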

#### Audio Data Instance (`False`)

- **participants**: An object containing meta information about the participants in the recording.
  - **estimated_l2_proficiency**: A list of estimated language proficiency levels.
  - **gender**: A list of genders of the participants.
  - **languages**: A list of languages spoken by the participants.
  - **pseudonym**: A list of pseudonyms assigned to the participants.
  - **school_grade**: A list of school grades for each participant.
  - **year_of_birth**: A list of birth years for each participant.
- **speakers**: An object containing information about the speakers in the transcript.
  - **name**: A list of speaker names as identified in the transcript.
  - **spkid**: A list of speaker IDs.
- **segments**: An object containing details about each segment of the recording.
  - **speaker**: A list of speaker IDs for each segment.
  - **words**: A list of objects, each containing details about the words spoken in the segment.
    - **start**: A list of start times for each word in the segment.
    - **end**: A list of end times for each word in the segment.
    - **duration**: A list of durations for each word in the segment.
    - **text**: A list of words spoken in the segment.
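
The parallel lists in `segments` can be regrouped per speaker to reconstruct each speaker's side of the conversation. A minimal sketch, assuming `example` is an unsegmented audio instance as shown above:

```python
from collections import defaultdict

# Regroup segment texts by speaker ID (one entry per turn)
turns_by_speaker = defaultdict(list)
for speaker, words in zip(example['segments']['speaker'], example['segments']['words']):
    turns_by_speaker[speaker].append(' '.join(words['text']))

for speaker, turns in turns_by_speaker.items():
    print(speaker, '->', len(turns), 'turns')
```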

### Data Splits

The data splits can be defined as part of the configuration using the `folds` parameter.
Without specifying `folds`, all data is loaded into the train split.
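
The exact structure expected for `folds` is defined by the dataset builder; as a purely hypothetical sketch, assuming it maps split names to lists of values of the `stratify_column`:

```python
from datasets import load_dataset

# Hypothetical sketch: the exact expected structure of `folds` is defined
# by the builder; here we assume split names mapped to stratify values
dataset = load_dataset(
    'mict-zhaw/chall',
    data_dir='path/to/folder/folder_name',
    stratify_column='school_grade',
    folds={'train': ['4', '5'], 'test': ['6']},
)
```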

#### `asr_acl`

For the experiments in the associated paper, we split the dataset into five distinct folds of similar duration
(about 16h each), where each class (and therefore also each speaker) occurs in only one fold.
To simulate the use case of the ASR system being confronted with a new class of learners, each fold
contains data from a mix of grades. The following figure visualises the duration and grade distribution of each fold.

![Chall Folds](doc/chall_data_folds_v1.svg)

## Dataset Creation

### Curation Rationale

The dataset was created to address the need for ASR systems that can handle children’s spontaneous speech 
and preserve their errors to provide effective corrective feedback in language learning environments.


### Source Data

#### Initial Data Collection and Normalization

Audio data was collected from primary school students aged 9 to 14 years, performing language learning tasks in pairs, trios, or individually. The recordings were made at schools and universities, and detailed verbatim transcriptions were created by a transcription agency, following specific guidelines.


#### Who are the source language producers?

The source data producers include primary school students from German-speaking Switzerland, aged 9 to 14 years, participating in language learning activities.

### Annotations

#### Annotation process

The transcription and annotation process was outsourced to a transcription agency, following detailed guidelines for error annotation, 
including symbols for grammatical, lexical, and pronunciation errors, as well as German word usage.


#### Who are the annotators?

The annotators were professionals from a transcription agency, trained according to specific guidelines provided by the project team.

### Personal and Sensitive Information

The dataset contains audio recordings of minors.
All data was collected with informed consent from legal guardians, and recordings are anonymized to protect the identities of the participants.


## Considerations for Using the Data

### Social Impact of Dataset

The dataset supports the development of educational tools that could enhance language learning for children, providing an important resource for educational technology.

### Discussion of Biases

Given the specific demographic (Swiss German-speaking schoolchildren), the dataset may not generalize well to other forms of English or to speakers from different linguistic or cultural backgrounds.

### Other Known Limitations

Outsourcing transcription and error annotation always carries a risk of erroneous data, since most
transcribers are not trained in error annotation.

## Additional Information

### Dataset Curators

The dataset was curated by researchers at PHZH, UZH, and ZHAW, in collaboration with local schools in Switzerland.

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@inproceedings{anonymous2024errorpreserving,
  title={Error-preserving Automatic Speech Recognition of Young English Learners' Language},
  author={Janick Michot and Manuela Hürlimann and Jan Deriu and Luzia Sauer and Katsiaryna Mlynchyk and Mark Cieliebak},
  booktitle={The 62nd Annual Meeting of the Association for Computational Linguistics},
  year={2024},
  url={https://openreview.net/forum?id=XPIwvlqIfI}
}
```

### Contributions

Thanks to [@mict-zhaw](https://github.com/mict-zhaw) for adding this dataset.