---
dataset_info:
- config_name: clean
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: session_id
    dtype: string
  splits:
  - name: train
    num_bytes: 2807175314.0
    num_examples: 39989
  - name: validation
    num_bytes: 256038049.0
    num_examples: 4161
  - name: test
    num_bytes: 253226827.0
    num_examples: 3813
  download_size: 3311132741
  dataset_size: 3316440190.0
- config_name: default
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: session_id
    dtype: string
  splits:
  - name: train
    num_bytes: 3341105554.6299996
    num_examples: 46583
  download_size: 3346820592
  dataset_size: 3341105554.6299996
configs:
- config_name: clean
  data_files:
  - split: train
    path: clean/train-*
  - split: validation
    path: clean/validation-*
  - split: test
    path: clean/test-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for code-switching yodas

<!-- Provide a quick summary of the dataset. -->

This dataset is derived from espnet/yodas; more details can be found here: https://huggingface.co/datasets/espnet/yodas

It is a subset of the zh000 subset of the espnet/yodas dataset, selecting videos that exhibit Mandarin-English code-switching.

Note that code-switching is only guaranteed per video rather than per utterance, so not every utterance in the dataset contains code-switching.
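If per-utterance code-switching matters for your use case, a simple heuristic is to require both CJK characters and Latin letters in the transcript. This sketch is not part of the original pipeline, just an illustration:

```python
import re

def is_code_switched(text: str) -> bool:
    """Heuristic: an utterance code-switches if it contains both
    CJK characters and Latin letters."""
    has_cjk = re.search(r"[\u4e00-\u9fff]", text) is not None
    has_latin = re.search(r"[A-Za-z]", text) is not None
    return has_cjk and has_latin

print(is_code_switched("項明生,訂Agoda的項明生"))  # True
print(is_code_switched("今天天氣很好"))  # False
```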

## Dataset Details

### Dataset Usage
The `default` config does not modify any text of the selected samples.
```python
from datasets import load_dataset
cs_yodas = load_dataset("georgechang8/code_switch_yodas_zh")
```
The `clean` config cleanses the text of the selected samples (as described in the processing steps below).
```python
from datasets import load_dataset
cs_yodas_clean = load_dataset("georgechang8/code_switch_yodas_zh", "clean")
```
```python
{'audio': {'path': 'GaUSbuZm5Ec-00207-00083809-00084143.wav',
  'array': array([-0.09082031,  0.01898193,  0.02850342, ...,  0.01419067,
          0.01391602,  0.01513672]),
  'sampling_rate': 16000},
 'text': '項明生,訂Agoda的項明生',
 'id': 'GaUSbuZm5Ec-00207-00083809-00084143',
 'session_id': 'GaUSbuZm5Ec'}
```

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Language(s):** Chinese, English
- **License:** CC-BY-3.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://huggingface.co/datasets/espnet/yodas

## Dataset Creation

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

1. Read the text content of the clips in espnet/yodas
```python
import glob
import re
import pandas as pd
from pathlib import Path
from tqdm.auto import tqdm
from collections import defaultdict
from dataclasses import dataclass, asdict

@dataclass
class Video:
    name: str = ""
    shard: str = ""
    duration: float = 0
    content: str = ""

data = defaultdict(Video)
trange = tqdm(glob.glob("yodas/data/zh000/text/*.txt"))
for file in trange:
    shard = Path(file).stem
    with open(file, "r", encoding="utf8") as f:
        for m in re.finditer(r"(.{11})-\d{5}-\d{8}-(\d{8})\s+(.*)", f.read()):
            name = m.group(1)
            assert data[name].shard in ["", shard]
            data[name].shard = shard
            data[name].name = name
            data[name].duration = int(m.group(2)) / 100
            data[name].content += " " + m.group(3)
    trange.set_postfix(vids=len(data))

data_df = pd.DataFrame(map(asdict, data.values()))
```
2. Retain videos with Chinese characters
```python
import re
cjk_pattern = re.compile(
    # puncs \uff00-\uffef \u3000-\u303f
    r"[\u3400-\u4db5\u4e00-\u9fa5\u9fa6-\u9fbb\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\u2e80-\u2eff\u31c0-\u31ef\u2f00-\u2fdf\u2ff0-\u2fff\u3100-\u312f\u31a0-\u31bf\ufe10-\ufe1f\ufe30-\ufe4f\u2600-\u26ff\u2700-\u27bf\u3200-\u32ff\u3300-\u33ff]"
)
chinese_df = data_df[data_df['content'].apply(lambda x: cjk_pattern.search(x) is not None)]
```
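As a quick sanity check of this filter (re-declaring an abbreviated version of the pattern so the snippet stands alone; the full pattern above covers many more ranges):

```python
import re

# Abbreviated to the two main CJK blocks used in step 2
cjk_pattern = re.compile(r"[\u3400-\u4db5\u4e00-\u9fa5]")

assert cjk_pattern.search("pure English transcript") is None    # video dropped
assert cjk_pattern.search("今天 review 一下 GPU") is not None   # video kept
```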
3. Filter out videos containing Pinyin
```python
pinyin_pattern = re.compile(
    r'[üÜāáǎàōóǒòēéěèīíǐìūúǔùǖǘǚǜ]'
)
chinese_pin_df = chinese_df[chinese_df['content'].apply(lambda x: pinyin_pattern.search(x) is None)]
```
4. Retain videos with Latin script
```python
az_pattern = re.compile(
    r"[a-zA-Z]+"
)
mixed_df = chinese_pin_df[chinese_pin_df['content'].apply(lambda x: az_pattern.search(x) is not None)]
```
5. Retain videos with punctuation
```python
punc_pattern = re.compile(
    r'[!?。,、·.,?!]'
)
mixed_punc_df = mixed_df[mixed_df['content'].apply(lambda x: punc_pattern.search(x) is not None)]
```
6. Sort by increasing proportion of Chinese characters
```python
def func(x):
    return x.apply(lambda z: len(cjk_pattern.findall(z)) / len(z))
mixed_punc_df = mixed_punc_df.sort_values(by='content', key=func)
```
> This leaves around 1,000 videos.

7. Save to CSV for manual inspection
```python
mixed_punc_df.to_csv('sanity.csv')
```
8. Manually inspect 0-500
  - NwRTR8mY-7A: mostly English
  - ASL3yEYC1IE, etc.: contains an English translation for each line
  - Recurring creators whose content is not good code-switching: "天天開心","日向蓝子","笑花兒","关于麻将的职人","大濕:","朋友sisi","please my hero","金玲老師"
  - Manually pick exceptions to the previous rule to add to the accepted list
  - Recurring creators whose content is good code-switching: "我是小夫","久德電子","GL_TECH"
  - Most videos about the U.S. stock market or tech reviews are accepted.

9. Quickly skim through 501-1000 (only 10 were picked)

> A total of 176 videos were picked in steps 8 & 9

10. Extract the selected videos' audio clips
```python
from tqdm.auto import tqdm
from pathlib import Path
import tarfile
import re

with open("codeswitch.txt", "r") as f: # list of 176 picked video_ids
    codeswitch = set(map(str.strip, f.readlines()))
code_switch_data = data_df[data_df['name'].apply(lambda x: x in codeswitch)]

shard_names = {}
for name, shard in zip(
    code_switch_data['name'].tolist(),
    code_switch_data['shard'].tolist()
):
    if shard not in shard_names:
        shard_names[shard] = set()
    shard_names[shard].add(name)

def extract_wav_files(shard, output_dir):
    tar_file_path = f"yodas/data/zh000/audio/{shard}.tar.gz"
    names = shard_names[shard]

    # Open the tar.gz file
    with tarfile.open(tar_file_path, 'r:gz') as tar:
        # Iterate through the contents of the tar file
        for member in tar.getmembers():
            # Check if the member is a WAV file
            video_id = re.search(r"(.{11})-\d{5}-\d{8}-\d{8}", member.name)
            if video_id and video_id.group(1) in names:
                # Extract the WAV file contents into the output directory
                output_path = Path(output_dir, Path(member.name).name)
                with open(output_path, 'wb') as output_file:
                    output_file.write(tar.extractfile(member).read())

output_dir = "./code_switch_yodas"
Path(output_dir).mkdir(exist_ok=True, parents=True)
for shard in tqdm(shard_names):
    extract_wav_files(shard, output_dir)
```
11. Publish the subset
```python
import datasets
from datasets import Dataset

# `clip_ids` and `texts` hold the ids and transcripts of the extracted clips
audio_dataset = Dataset.from_dict({
    "audio": [
        f"{output_dir}/{clip_id}.wav"
        for clip_id in clip_ids
    ],
    "text": texts,
    "id": clip_ids,
    "session_id": [x[:11] for x in clip_ids]
})
audio_dataset = audio_dataset.cast_column("audio", datasets.features.Audio(sampling_rate=16000))
audio_dataset = audio_dataset.sort("id")
audio_dataset.push_to_hub(
    "georgechang8/code_switch_yodas_zh",
    commit_message="Initial commit",
    embed_external_files=True
)
```
#### Extra (without punctuation)
Repeating steps 1-10, but reversing step 5 to select videos without punctuation, yields a small extra set:
```python
extra_set = {
    "37s5xmYYSM8",
    "3ZVVBEugui4",
    "-zHxyIuEw-8",
    "Dngt6Ca8-3u",
    "zJcle9SO98Q",
    "murJVhx5dd0",
    "6hCLoOVtM5Y", # test
    "U-1tallz0hM",
    "wfCUHCYJgIU",
    "GrKoml8qb78",
    "YMTMTFpV7_M",
    "GJV0ZRzAARy",
    "BtMii9364Fg",
    "apK8JYOq6gI",
    "IF-GnMzu7y8",
    "0qJ61eujIVo",
    "Okq02I_jTcA",
    "hCnZlSbTht8",
    "rMk21JBTisE", # validation
    "s9qzwyIM3JI",
    "NBf6Z9R1r7I",
    "jIbc2Jzfa0g",
}
```
```
train:
20 videos
validation:
1 video
test:
1 video
DatasetDict({
    train: Dataset({
        features: ['audio', 'text', 'id', 'session_id'],
        num_rows: 5990
    })
    validation: Dataset({
        features: ['audio', 'text', 'id', 'session_id'],
        num_rows: 397
    })
    test: Dataset({
        features: ['audio', 'text', 'id', 'session_id'],
        num_rows: 282
    })
})
```

#### Data Cleaning
1. The video `Pew9CK74axu` is manually cleaned
```python
def filter_fn(batch):
    return (z == 'Pew9CK74axu' for z in batch['session_id'])

special_care = audio_dataset.filter(filter_fn, num_proc=8, batched=True)
with open("manual_edit.txt", "w", encoding="utf8") as f:
    for l in special_care['text']:
        f.write(l + "\n")
# manual cleaning ...
with open("manual_edit_finish.txt", "r", encoding="utf8") as f:
    lines = list(map(str.strip, f.readlines()))
replace_dict = {
    a: b 
    for a, b in zip(special_care['id'], lines)
}
def manual_edit(batch):
    texts = []
    for sid, orig in zip(batch['id'], batch['text']):
        texts += [replace_dict.get(sid, orig)]
    return {'text': texts}

audio_dataset_manual = audio_dataset.map(manual_edit, batched=True, num_proc=8)
```
2. Low log-prob filtering
Using whisper-medium to compute the log-probability of each video, then filtering by a hand-picked threshold of `-3.5`:
```python
# Get rid of low-prob videos
low_prob_set = {
    '9lQs7INyYBQ',
    'HezOD6XPr_M',
    'HfeLdctBVGY',
    'IzfrgOUd2Uc',
    'UFklIGGKWN0',
    '_x8LwaPRtCE',
    'eK9m6uCNN6Q',
    'erbZNpDMHN0',
    'l9BjfWr1_Pg',
    'nSStWkJtbR4',
    'wrEY_EzQEsy',
    '3Zed0NHrmxo',
    'r29FW7K4iok',
    'MgdQuY0-abI',
    'yHh4rM2KX5Q'
}
audio_dataset_manual = audio_dataset_manual.filter(lambda batch: [s not in low_prob_set for s in batch['session_id']], num_proc=2, batched=True)
# 176 - 14 = 161 videos
```
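The scoring code itself is not reproduced in the card; a minimal sketch of the thresholding logic, assuming per-token log-probs from whisper-medium have already been collected per video (names and scores are hypothetical):

```python
import statistics

def flag_low_prob_videos(video_logprobs, threshold=-3.5):
    """Return ids of videos whose mean token log-prob falls below threshold."""
    return {
        vid for vid, lps in video_logprobs.items()
        if statistics.mean(lps) < threshold
    }

# Toy example with made-up scores
scores = {"good_vid": [-0.5, -1.0], "bad_vid": [-4.0, -5.0]}
print(flag_low_prob_videos(scores))  # {'bad_vid'}
```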
3. train/dev/test split
```python
from datasets import DatasetDict

validation_set = {
    "AyPua3Mi9FU",
    "r29FW7K4iok", # low prob
    "GaUSbuZm5Ec",
    "AKW9vmSy8lQ",
    "3Zed0NHrmxo", # low prob
    "ZHPFLOuT48u",
    "RiCN24FLVLk",
    "zrV_ZNWo8PQ",
    # "rMk21JBTisE", # new (no punc) ==> not in 'default' config
}
test_set = {
    "lH7bZ-8hF1o",
    "WF4ovtdi6wu",
    "MgdQuY0-abI", # low prob
    "yHh4rM2KX5Q", # low prob
    "e_cxHBDSqsM",
    "NO6985Bf_Ro",
    # "6hCLoOVtM5Y", # new (no punc) ==> not in 'default' config
}

def train_fn(batch):
    return (z not in (validation_set|test_set) for z in batch['session_id'])
def validation_fn(batch):
    return (z in validation_set for z in batch['session_id'])
def test_fn(batch):
    return (z in test_set for z in batch['session_id'])

audio_dataset_manual = DatasetDict(
    train=audio_dataset_manual.filter(train_fn, num_proc=2, batched=True),
    validation=audio_dataset_manual.filter(validation_fn, num_proc=2, batched=True),
    test=audio_dataset_manual.filter(test_fn, num_proc=2, batched=True)
)
```
Don't forget to merge with the extra set:
```python
from datasets import concatenate_datasets, load_dataset
ds_extra = load_dataset("georgechang8/code_switch_yodas_zh", "clean_extra") # no longer available
audio_dataset_manual = DatasetDict({
    split: concatenate_datasets([audio_dataset_manual[split], ds_extra[split]])
    for split in audio_dataset_manual
})
```
Run a sanity check:
```python
ds_full = audio_dataset_manual
for split in ds_full:
    print(split, len(set(ds_full[split]['id'])))
assert len(set(ds_full['train']['id']) & set(ds_full['validation']['id'])) == 0
assert len(set(ds_full['train']['id']) & set(ds_full['test']['id'])) == 0
assert len(set(ds_full['test']['id']) & set(ds_full['validation']['id'])) == 0
```

4. General cleansing pipeline
```python
import re
import html

def remove_emojies(text):
  # Ref: https://gist.github.com/Alex-Just/e86110836f3f93fe7932290526529cd1#gistcomment-3208085
  # Ref: https://en.wikipedia.org/wiki/Unicode_block
  EMOJI_PATTERN = re.compile(
    "["
    "\U0001F1E0-\U0001F1FF"  # flags (iOS)
    "\U0001F300-\U0001F5FF"  # symbols & pictographs
    "\U0001F600-\U0001F64F"  # emoticons
    "\U0001F680-\U0001F6FF"  # transport & map symbols
    "\U0001F700-\U0001F77F"  # alchemical symbols
    "\U0001F780-\U0001F7FF"  # Geometric Shapes Extended
    "\U0001F800-\U0001F8FF"  # Supplemental Arrows-C
    "\U0001F900-\U0001F9FF"  # Supplemental Symbols and Pictographs
    "\U0001FA00-\U0001FA6F"  # Chess Symbols
    "\U0001FA70-\U0001FAFF"  # Symbols and Pictographs Extended-A
    "\U00002702-\U000027B0"  # Dingbats
    "]"
  )
  text = re.sub(EMOJI_PATTERN, r' ', text)
  return text

def clean_transcripts(x):
    cjk = "[\u3400-\u4db5\u4e00-\u9fa5\u9fa6-\u9fbb\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\uff00-\uffef\u2e80-\u2eff\u3000-\u303f\u31c0-\u31ef\u2f00-\u2fdf\u2ff0-\u2fff\u3100-\u312f\u31a0-\u31bf\ufe10-\ufe1f\ufe30-\ufe4f\u2600-\u26ff\u2700-\u27bf\u3200-\u32ff\u3300-\u33ff]"
    x = html.unescape(x)
    x = remove_emojies(x)
    x = re.sub(r'\.{3,}', ' ', x)
    x = re.sub(r'…+', ' ', x)
    x = re.sub(r'\s+|^|$', '  ', x) # expanding space allows matching " uh uh" case
    x = re.sub(rf"({cjk}|\s)([Uu][mh]|U[MH])({cjk}|\s)", r"\1 \3", x) # uh/um surrounded by cjk or space
    x = re.sub(r"([HhEe]mm+|[HE]MM+)", " ", x) # hmm emm
    x = re.sub(fr"\*+({cjk}+|[A-Za-z]+)\*+", " ", x)  # *叹气* 
    x = re.sub(r'[呃嗯]+', ' ', x)  # 呃嗯
    def replace_except(pattern, repl, z, excs):
        for e, t in excs:
            z = z.replace(e, t)
        z = re.sub(pattern, repl, z)
        for e, t in excs:
            z = z.replace(t, e)
        return z
    # remove 恩 except for 恩桥 感恩 恩怨
    x = replace_except("恩", ' ', x, excs=[("感恩", "呃"),("恩桥", "嗯"),("恩怨", "emm")])
    x = re.sub(r'（[^（）]*）', ' ', x)  # remove （...）
    x = re.sub(r'[（）]+', ' ', x) # remove isolated （）
    x = re.sub(r"\s+", " ", x)
    # remove (...) except for 'Program Files (x86)'
    x = replace_except(r'\([^()]*\)', ' ', x, excs=[("Program Files (x86)", "呃")])
    x = re.sub(r'[()]+', ' ', x) # remove isolated ()
    puncs = r'[,?!。:;~?!,.:;~]'
    x = re.sub(rf'({puncs})(?:\s*\1)+', r'\1', x) # ??? -> ?
    x = re.sub(rf"\s+({puncs})", r'\1', x) # text , -> text,
    sp_puncs = r'[?!,.;]' # puncs with spaces
    x = re.sub(rf"({puncs}*{sp_puncs})([^\d])", r'\1 \2', x) # text!?cont -> text!? cont
    x = re.sub(rf"^[\s]*{puncs}+", "", x) # leading puncs
    x = re.sub(r"\s+", " ", x) # excess spaces
    return x.strip()

def clean_batch(batch):
    return {'text': [clean_transcripts(x) for x in batch['text']]}

audio_dataset_manual_clean = audio_dataset_manual.map(clean_batch, batched=True, num_proc=8)
```
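For instance, the punctuation steps in `clean_transcripts` behave like this (shown with a simplified subset of the punctuation class used above):

```python
import re

puncs = r'[,?!.;:~]'  # simplified subset of the class used above
x = "真的嗎??? 好 , , 吧"
x = re.sub(rf'({puncs})(?:\s*\1)+', r'\1', x)  # collapse repeats: "???" -> "?"
x = re.sub(rf"\s+({puncs})", r'\1', x)         # attach punctuation: " ," -> ","
print(x)  # 真的嗎? 好, 吧
```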
5. Publish
```python
audio_dataset_manual_clean.push_to_hub(
    "georgechang8/code_switch_yodas_zh",
    config_name="clean",
    set_default=False,
    commit_message="Clean transcript",
    max_shard_size="1GB",
    embed_external_files=True,
)
```

## Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

1. The filtering & hand-picking process might have left out useful videos.
2. The transcriptions in the `default` config are not processed in any way, so they might need further cleansing.

## Dataset Card Contact

- Original dataset: https://huggingface.co/datasets/espnet/yodas
- CS processing: Chih-Chiang Chang (cc.chang0828@gmail.com)