---
dataset_info:
- config_name: 30s
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: id
dtype: string
- name: condition_on_prev
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 3194102757.33
num_examples: 3659
- name: validation
num_bytes: 410704018.0
num_examples: 462
- name: test
num_bytes: 355118867.0
num_examples: 398
download_size: 3379797146
dataset_size: 3959925642.33
- config_name: clean
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: id
dtype: string
- name: session_id
dtype: string
splits:
- name: train
num_bytes: 2436576408.0
num_examples: 33802
- name: validation
num_bytes: 234816145.0
num_examples: 3764
- name: test
num_bytes: 235805821.0
num_examples: 3531
download_size: 2902210967
dataset_size: 2907198374.0
- config_name: default
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: id
dtype: string
- name: session_id
dtype: string
splits:
- name: train
num_bytes: 3341105554.6299996
num_examples: 46583
download_size: 3346820592
dataset_size: 3341105554.6299996
configs:
- config_name: 30s
data_files:
- split: train
path: 30s/train-*
- split: validation
path: 30s/validation-*
- split: test
path: 30s/test-*
- config_name: clean
data_files:
- split: train
path: clean/train-*
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for code-switching yodas
<!-- Provide a quick summary of the dataset. -->
This dataset is derived from espnet/yodas; more details can be found here: https://huggingface.co/datasets/espnet/yodas
It is a subset of the zh000 portion of espnet/yodas, selecting videos that exhibit Mandarin-English code-switching.
Note that code-switching is only guaranteed per video rather than per utterance, so not every utterance in the dataset contains code-switching.
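Since the selection is per video, you can filter for utterance-level code-switching yourself. A minimal sketch (not part of the original pipeline) that keeps only utterances containing both CJK and Latin characters:
```python
import re
from datasets import load_dataset

cjk_re = re.compile(r"[\u4e00-\u9fff]")  # common CJK ideographs
latin_re = re.compile(r"[A-Za-z]")

def is_code_switched(example):
    # an utterance counts as code-switched here iff it mixes both scripts
    text = example["text"]
    return bool(cjk_re.search(text)) and bool(latin_re.search(text))

ds = load_dataset("georgechang8/code_switch_yodas_zh", "clean", split="train")
cs_only = ds.filter(is_code_switched)
```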
## Dataset Details
### Dataset Usage
The `default` config does not modify any text of the selected samples.
```python
from datasets import load_dataset
cs_yodas = load_dataset("georgechang8/code_switch_yodas_zh")
```
The `clean` config applies the cleaning pipeline described below to the text of the selected samples.
```python
from datasets import load_dataset
cs_yodas_clean = load_dataset("georgechang8/code_switch_yodas_zh", "clean")
```
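A sample from the `clean` config: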
```python
{'audio': {'path': 'GaUSbuZm5Ec-00207-00083809-00084143.wav',
'array': array([-0.09082031, 0.01898193, 0.02850342, ..., 0.01419067,
0.01391602, 0.01513672]),
'sampling_rate': 16000},
'text': '項明生,訂Agoda的項明生',
'id': 'GaUSbuZm5Ec-00207-00083809-00084143',
'session_id': 'GaUSbuZm5Ec'}
```
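The metadata above also lists a `30s` config, whose examples carry additional `condition_on_prev` and `duration` fields (presumably for Whisper-style long-form training); it loads the same way:
```python
from datasets import load_dataset
cs_yodas_30s = load_dataset("georgechang8/code_switch_yodas_zh", "30s")
```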
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Language(s):** Chinese, English
- **License:** CC-BY-3.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://huggingface.co/datasets/espnet/yodas
## Dataset Creation
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
1. Read the text content of the clips in espnet/yodas
```python
import glob
import re
import pandas as pd
from pathlib import Path
from tqdm.auto import tqdm
from collections import defaultdict
from dataclasses import dataclass, asdict
@dataclass
class Video:
name: str = ""
shard: str = ""
duration: float = 0
content: str = ""
data = defaultdict(Video)
trange = tqdm(glob.glob("yodas/data/zh000/text/*.txt"))
for file in trange:
    shard = Path(file).stem
    with open(file, "r", encoding="utf8") as f:
        # clip id format: <video_id:11>-<clip_no:5>-<start:8>-<end:8>, times in 10 ms units
        for m in re.finditer(r"(.{11})-\d{5}-\d{8}-(\d{8})\s+(.*)", f.read()):
            name = m.group(1)
            assert data[name].shard in ["", shard]  # a video never spans two shards
            data[name].shard = shard
            data[name].name = name
            data[name].duration = int(m.group(2)) / 100  # end time of last clip seen, in seconds
            data[name].content += " " + m.group(3)
    trange.set_postfix(vids=len(data))
data_df = pd.DataFrame(map(asdict, data.values()))
```
2. Retain videos with Chinese characters
```python
import re
cjk_pattern = re.compile(
# puncs \uff00-\uffef \u3000-\u303f
r"[\u3400-\u4db5\u4e00-\u9fa5\u9fa6-\u9fbb\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\u2e80-\u2eff\u31c0-\u31ef\u2f00-\u2fdf\u2ff0-\u2fff\u3100-\u312f\u31a0-\u31bf\ufe10-\ufe1f\ufe30-\ufe4f\u2600-\u26ff\u2700-\u27bf\u3200-\u32ff\u3300-\u33ff]"
)
chinese_df = data_df[data_df['content'].apply(lambda x: cjk_pattern.search(x) is not None)]
```
3. Filter out videos containing Pinyin diacritics
```python
pinyin_pattern = re.compile(
r'[üÜāáǎàōóǒòēéěèīíǐìūúǔùǖǘǚǜ]'
)
chinese_pin_df = chinese_df[chinese_df['content'].apply(lambda x: pinyin_pattern.search(x) is None)]
```
4. Retain videos containing Latin script
```python
az_pattern = re.compile(
r"[a-zA-Z]+"
)
mixed_df = chinese_pin_df[chinese_pin_df['content'].apply(lambda x: az_pattern.search(x) is not None)]
```
5. Retain videos containing punctuation
```python
punc_pattern = re.compile(
    r'[！？。，、·.,?!]'
)
mixed_punc_df = mixed_df[mixed_df['content'].apply(lambda x: punc_pattern.search(x) is not None)]
```
6. Sort by increasing proportion of Chinese characters
```python
def cjk_ratio(x):
    # fraction of characters in each transcript that are CJK
    return x.apply(lambda z: len(cjk_pattern.findall(z)) / len(z))
mixed_punc_df = mixed_punc_df.sort_values(by='content', key=cjk_ratio)
```
> This leaves around 1,000 videos.
7. Save to CSV for manual inspection
```python
mixed_punc_df.to_csv('sanity.csv')
```
8. Manually inspect videos 0-500
    - NwRTR8mY-7A: mostly English
    - ASL3yEYC1IE, etc.: contain an English translation for each line
    - Recurring creators whose content is not good code-switching: "天天開心", "日向蓝子", "笑花兒", "关于麻将的职人", "大濕:", "朋友sisi", "please my hero", "金玲老師"
    - Exceptions to the previous rule were manually picked and added to the accepted list
    - Recurring creators whose content is good code-switching: "我是小夫", "久德電子", "GL_TECH"
    - Most videos about the U.S. stock market or tech reviews were accepted (see the sketch after this list).
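These verdicts were applied by hand, but the per-creator rules can be expressed in code. A minimal, hypothetical sketch (matching creator names against the aggregated transcript `content` is an assumption, not the original procedure):
```python
# Hypothetical sketch: encode the manual accept/reject heuristics.
rejected_creators = ["天天開心", "日向蓝子", "笑花兒", "关于麻将的职人",
                     "大濕:", "朋友sisi", "please my hero", "金玲老師"]
accepted_creators = ["我是小夫", "久德電子", "GL_TECH"]

def verdict(content: str) -> str:
    # exceptions (accepted creators) take precedence over the reject rule
    if any(c in content for c in accepted_creators):
        return "accept"
    if any(c in content for c in rejected_creators):
        return "reject"
    return "inspect"  # everything else still needs a manual look

mixed_punc_df["verdict"] = mixed_punc_df["content"].apply(verdict)
```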
9. Quickly skim through 501-1000 (only 10 were picked)
> A total of 176 videos were picked in steps 8 & 9
10. Extract selected video clips' audio
```python
from tqdm.auto import tqdm
from pathlib import Path
import tarfile
with open("codeswitch.txt", "r") as f: # list of 176 picked video_ids
codeswitch = set(map(str.strip, f.readlines()))
code_switch_data = data_df[data_df['name'].apply(lambda x: x in codeswitch)]
shard_names = {}
for name, shard in zip(
code_switch_data['name'].tolist(),
code_switch_data['shard'].tolist()
):
if shard not in shard_names:
shard_names[shard] = set()
shard_names[shard].add(name)
def extract_wav_files(shard, output_dir):
    tar_file_path = f"yodas/data/zh000/audio/{shard}.tar.gz"
    names = shard_names[shard]
    # Open the shard's tar.gz file and scan its members
    with tarfile.open(tar_file_path, 'r:gz') as tar:
        for member in tar.getmembers():
            # Keep only clips that belong to one of the selected videos
            video_id = re.search(r"(.{11})-\d{5}-\d{8}-\d{8}", member.name)
            if video_id and video_id.group(1) in names:
                # Extract the WAV file into the output directory
                output_path = Path(output_dir, Path(member.name).name)
                with open(output_path, 'wb') as output_file:
                    output_file.write(tar.extractfile(member).read())

# Create the output directory if it doesn't exist
output_dir = "./code_switch_yodas"
Path(output_dir).mkdir(exist_ok=True, parents=True)
for shard in tqdm(shard_names):
extract_wav_files(shard, output_dir)
```
11. Publish the subset
```python
import datasets
from datasets import Dataset
# clip_ids / texts: ids and matching transcripts of the extracted clips
audio_dataset = Dataset.from_dict({
"audio": [
f"{output_dir}/{clip_id}.wav"
for clip_id in clip_ids
],
"text": texts,
"id": clip_ids,
"session_id": [x[:11] for x in clip_ids]
})
audio_dataset = audio_dataset.cast_column("audio", datasets.features.Audio(sampling_rate=16000))
audio_dataset = audio_dataset.sort("id")
audio_dataset.push_to_hub(
"georgechang8/code_switch_yodas_zh",
commit_message="Initial commit",
embed_external_files=True
)
```
#### Data Cleaning
1. The video `Pew9CK74axu` was manually cleaned
```python
def filter_fn(batch):
    # boolean mask selecting clips from session Pew9CK74axu
    return [z == 'Pew9CK74axu' for z in batch['session_id']]
special_care = audio_dataset.filter(filter_fn, num_proc=8, batched=True)
with open("manual_edit.txt", "w", encoding="utf8") as f:
for l in special_care['text']:
f.write(l + "\n")
# manual cleaning ...
with open("manual_edit_finish.txt", "r", encoding="utf8") as f:
lines = list(map(str.strip, f.readlines()))
replace_dict = {
a: b
for a, b in zip(special_care['id'], lines)
}
def manual_edit(batch):
texts = []
for sid, orig in zip(batch['id'], batch['text']):
texts += [replace_dict.get(sid, orig)]
return {'text': texts}
audio_dataset_manual = audio_dataset.map(manual_edit, batched=True, num_proc=8)
```
2. General cleansing pipeline
```python
import re
import html
def remove_emojis(text):
# Ref: https://gist.github.com/Alex-Just/e86110836f3f93fe7932290526529cd1#gistcomment-3208085
# Ref: https://en.wikipedia.org/wiki/Unicode_block
EMOJI_PATTERN = re.compile(
"["
"\U0001F1E0-\U0001F1FF" # flags (iOS)
"\U0001F300-\U0001F5FF" # symbols & pictographs
"\U0001F600-\U0001F64F" # emoticons
"\U0001F680-\U0001F6FF" # transport & map symbols
"\U0001F700-\U0001F77F" # alchemical symbols
"\U0001F780-\U0001F7FF" # Geometric Shapes Extended
"\U0001F800-\U0001F8FF" # Supplemental Arrows-C
"\U0001F900-\U0001F9FF" # Supplemental Symbols and Pictographs
"\U0001FA00-\U0001FA6F" # Chess Symbols
"\U0001FA70-\U0001FAFF" # Symbols and Pictographs Extended-A
"\U00002702-\U000027B0" # Dingbats
"]"
)
text = re.sub(EMOJI_PATTERN, r' ', text)
return text
def clean_transcripts(x):
    cjk = "[\u3400-\u4db5\u4e00-\u9fa5\u9fa6-\u9fbb\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\uff00-\uffef\u2e80-\u2eff\u3000-\u303f\u31c0-\u31ef\u2f00-\u2fdf\u2ff0-\u2fff\u3100-\u312f\u31a0-\u31bf\ufe10-\ufe1f\ufe30-\ufe4f\u2600-\u26ff\u2700-\u27bf\u3200-\u32ff\u3300-\u33ff]"
    x = html.unescape(x)
    x = remove_emojis(x)
    dots = r'\.{3,}'
    x = re.sub(rf'{dots}|…|\s|^|$', ' ', x)  # expanding space allows matching " uh uh" case
    x = re.sub(rf"({cjk}|\s)([Uu][mh]|U[MH])({cjk}|\s)", r"\1 \3", x)  # uh/um surrounded by CJK or space
    x = re.sub(r"([HhEe]mm+|[HE]MM+)", " ", x)  # hmm / emm
    x = re.sub(fr"\*+({cjk}+|[A-Za-z]+)\*+", " ", x)  # stage directions like *叹气* ("*sigh*")
    x = re.sub(r'[呃嗯]', ' ', x)  # filler particles 呃/嗯 ("uh"/"mm")
    def replace_except(pattern, repl, z, excs):
        # temporarily swap the exceptions out, substitute, then swap them back
        for e, t in excs:
            z = z.replace(e, t)
        z = re.sub(pattern, repl, z)
        for e, t in excs:
            z = z.replace(t, e)
        return z
    # remove 恩 except in 感恩 / 恩桥 / 恩怨; the placeholders 呃/嗯 were already removed above, so they are safe swaps
    x = replace_except("恩", ' ', x, excs=[("感恩", "呃"), ("恩桥", "嗯"), ("恩怨", "emm")])
    # remove parenthesized asides; keep 'Program Files (x86)'
    x = re.sub(r'（[^（）]*）', ' ', x)  # full-width parentheses
    x = re.sub(r"\s+", " ", x)
    x = replace_except(r'\([^()]*\)', ' ', x, excs=[("Program Files (x86)", "呃")])
    puncs = r'[，？！。；?!,;～~]'
    x = re.sub(rf'({puncs})(?:\s*\1)+', r'\1', x)  # collapse repeats: "???" -> "?"
    x = re.sub(rf"\s+({puncs})", r'\1', x)  # "text ," -> "text,"
    sp_puncs = r'[?!,;]'  # puncs that should be followed by a space
    x = re.sub(rf"({puncs}*{sp_puncs})([a-zA-Z])", r'\1 \2', x)  # "text,cont" -> "text, cont"
    x = re.sub(rf"^[\s]*{puncs}+", "", x)  # strip leading puncs
    x = re.sub(r"\s+", " ", x)  # squeeze excess spaces
    return x.strip()
audio_dataset_manual_clean = audio_dataset_manual.map(lambda x: {"text": list(map(clean_transcripts, x['text']))}, batched=True, num_proc=8)
# push to hub
audio_dataset_manual_clean.push_to_hub(
"georgechang8/code_switch_yodas_zh",
config_name="clean",
set_default=False,
commit_message="Clean transcript",
max_shard_size="1GB",
embed_external_files=True,
)
```
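For a quick sanity check of the pipeline, a hedged usage sketch (the input string is invented for illustration):
```python
raw = "嗯,大家好!!!今天我們來 unbox 這台 laptop…謝謝大家"
print(clean_transcripts(raw))
# fillers like 嗯 should be stripped, "!!!" collapsed to a single "!",
# and the ellipsis replaced by a space before whitespace is squeezed
```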
## Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
1. The filtering and hand-picking process may have left out useful videos.
2. The transcripts in the `default` config are not processed in any way, so they may need further cleaning (the `clean` config applies the pipeline above).
## Dataset Card Contact
Original dataset: https://huggingface.co/datasets/espnet/yodas
Code-switching processing: Chih-Chiang Chang (cc.chang0828@gmail.com)