---
language:
  - en
  - zh
license: cc-by-sa-4.0
size_categories:
  - 10K<n<100K
dataset_info:
  - config_name: 30s
    features:
      - name: id
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: transcription
        dtype: string
      - name: condition_on_prev
        dtype: string
    splits:
      - name: train
        num_bytes: 1117823922.576
        num_examples: 1308
      - name: validation
        num_bytes: 115050795
        num_examples: 135
      - name: test
        num_bytes: 117961106
        num_examples: 138
    download_size: 1207395166
    dataset_size: 1350835823.576
  - config_name: default
    features:
      - name: id
        dtype: string
      - name: path
        dtype: string
      - name: audio
        dtype:
          audio:
            sampling_rate: 16000
      - name: transcription
        dtype: string
      - name: duration
        dtype: float32
      - name: language
        dtype: string
      - name: original_speaker_id
        dtype: int64
      - name: session_id
        dtype: int64
      - name: topic
        dtype: string
    splits:
      - name: train
        num_bytes: 1014558975.36
        num_examples: 9869
      - name: test
        num_bytes: 106170264.135
        num_examples: 1315
      - name: validation
        num_bytes: 106771606.91
        num_examples: 1130
    download_size: 1223500329
    dataset_size: 1227500846.4050002
configs:
  - config_name: 30s
    data_files:
      - split: train
        path: 30s/train-*
      - split: validation
        path: 30s/validation-*
      - split: test
        path: 30s/test-*
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

# Dataset Card for ASCEND_CLEAN

This dataset is derived from CAiRE/ASCEND. More information is available at https://huggingface.co/datasets/CAiRE/ASCEND.

- Removed the filler words 嗯, 呃, "um", and "uh"
- Resolved [UNK] tokens using whisper-medium
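
The cleaned data can be loaded directly with the 🤗 `datasets` library. The snippet below is a minimal sketch that assumes this repository's id is `georgechang8/ASCEND_CLEAN`; the two config names come from the metadata above.

```python
from datasets import load_dataset

# Repository id assumed from this card; adjust if the dataset lives elsewhere.
data = load_dataset("georgechang8/ASCEND_CLEAN", "default")   # per-utterance segments
data_30s = load_dataset("georgechang8/ASCEND_CLEAN", "30s")   # config with condition_on_prev (see metadata)

print(data)                               # splits: train / test / validation
print(data["train"][0]["transcription"])  # one cleaned transcription
```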

## Dataset Details

### Dataset Description

- Language(s): English, Simplified Chinese, Mixed
- License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

## Dataset Creation

### Source Data

https://huggingface.co/datasets/CAiRE/ASCEND

#### Data Collection and Processing

1. Load from source:

```python
from datasets import load_dataset, Audio as DSAudio
data_raw = load_dataset("CAiRE/ASCEND")
data_raw = data_raw.cast_column("audio", DSAudio(sampling_rate=16000))
```

2. Clean filler words:

```python
import re

def clean_transcripts(x):
    cjk = "[\u3400-\u4db5\u4e00-\u9fa5\u9fa6-\u9fbb\uf900-\ufa2d\ufa30-\ufa6a\ufa70-\ufad9\uff00-\uffef\u2e80-\u2eff\u3000-\u303f\u31c0-\u31ef\u2f00-\u2fdf\u2ff0-\u2fff\u3100-\u312f\u31a0-\u31bf\ufe10-\ufe1f\ufe30-\ufe4f\u2600-\u26ff\u2700-\u27bf\u3200-\u32ff\u3300-\u33ff]"
    x = re.sub(r'\.\.\.|\s|^|$', '  ', x)   # expanding space allows matching " uh uh" case
    x = re.sub(rf"({cjk}|\s)([Uu][mh]|U[MH])({cjk}|\s)", r"\1 \3", x) # replace any uh surrounded by cjk or space
    x = x.replace('嗯', ' ')
    x = x.replace('呃', ' ')
    x = re.sub(r"\s+", " ", x)
    return x.strip()

data = data_raw.map(lambda x: {"transcription": clean_transcripts(x['transcription'])})
data = data.filter(lambda x: x["transcription"] != "")
```
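
As a quick sanity check, the cleaning behaves as follows on a made-up code-switched utterance (hypothetical example, not a dataset sample):

```python
# The filler 嗯 and the standalone "um" are removed, then whitespace is collapsed
print(clean_transcripts("嗯 我觉得 um it's fine"))
# -> 我觉得 it's fine
```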

3. Isolate samples with UNKs:

```python
unks = data.filter(lambda x: "[UNK]" in x["transcription"])
unks.shape
```

```
{'train': (402, 9), 'test': (36, 9), 'validation': (63, 9)}
```
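
To eyeball what needs fixing, one can print a few of the isolated transcriptions (a minimal sketch):

```python
# Show a handful of UNK-containing transcriptions from the train split
for sample in unks["train"].select(range(3)):
    print(sample["id"], sample["transcription"])
```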

4. Load the Whisper model. For Chinese, the medium checkpoint performs best:

```python
from stable_whisper import load_faster_whisper
model = load_faster_whisper(
    "medium",
    device="cuda",
    compute_type="float16",
)
```

5. Resolve UNKs with whisper-medium:

```python
from sacrebleu.tokenizers.tokenizer_zh import TokenizerZh
from whisper_normalizer.basic import BasicTextNormalizer
import cn2an
import json
import jiwer
from tqdm.auto import tqdm

sacretok = TokenizerZh()
whisper_norm = BasicTextNormalizer()
def compute_mer(hyp, ref):
    def norm(x):
        return sacretok(cn2an.transform(whisper_norm(x), "an2cn"))
    return jiwer.process_words(norm(hyp), norm(ref)).wer * 100

adjusted = {split:dict() for split in data}
double_check = {split:dict() for split in data}

UNK = "[UNK]"

for split in data:
    trange = tqdm(unks[split], desc=split)
    for i,sample in enumerate(trange):
        transcription = sample['transcription']
        texts = transcription.split(UNK)
        words = []
        for sent in texts[1:]:
            for w in sacretok(sent).split():
                if w not in words:
                    words += [w]
        keyword = "关键词"
        header = "字幕"
        prompt = f"{keyword} \"{'/'.join(words)}\" {header} "
        result = model.transcribe_stable(
            audio=sample['audio']['array'],
            initial_prompt=prompt,  # encourage reuse of words
            prefix=texts[0],        # forcing start to follow real start
            language=sample['language'].replace('mixed', 'zh'),
            regroup=False,
            verbose=None,
            no_speech_threshold=1.0,
            suppress_silence=False,
            word_timestamps=True  # though unused, timestamps reduce hallucination
        ).merge_all_segments()
        adjustment = clean_transcripts(
            result.text
            .replace(keyword, " ")
            .replace(header, " ")
        )
        mer=compute_mer(transcription, adjustment)
        adjusted[split][sample['id']] = adjustment
        trange.set_postfix(mer=f"{mer:.2f}", dc=len(double_check[split]))
        if mer > 30:
            double_check[split][sample['id']] = mer
            print(transcription, "||", adjustment)
        if i % 5 == 0 or i == len(unks[split]) - 1:
            with open(f"checkpoint_{split}.json", "w") as f:
                json.dump(adjusted[split], f)
```
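
For intuition, here is roughly what the decoding hints look like for a made-up UNK-containing transcription (hypothetical text, not a real sample): the characters after each [UNK] become the 关键词 (keyword) list in `initial_prompt`, and the text before the first [UNK] is forced as the decoding `prefix`.

```python
from sacrebleu.tokenizers.tokenizer_zh import TokenizerZh

sacretok = TokenizerZh()

# Hypothetical transcription (not taken from the dataset)
example = "我们[UNK]去那个[UNK]咖啡店"
texts = example.split("[UNK]")        # ['我们', '去那个', '咖啡店']

words = []
for sent in texts[1:]:
    for w in sacretok(sent).split():  # CJK text is tokenized per character
        if w not in words:
            words.append(w)

prompt = f"关键词 \"{'/'.join(words)}\" 字幕 "
print(prompt)     # 关键词 "去/那/个/咖/啡/店" 字幕
print(texts[0])   # 我们 -> used as the forced decoding prefix
```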

6. Replace UNK utterances with the resolved ones:

```python
import json
from datasets import DatasetDict

adjusted_transcripts = {}
for split in data_raw:
    with open(f"checkpoint_{split}.json", "r", encoding="utf8") as f:
        adjusted_transcripts[split] = json.load(f)

UNK = "[UNK]"

def fix_unk(sample, split):
    def bad(orig, new):
        # true when the whisper output adds nothing beyond what removing UNK already gives
        return sacretok(new) in sacretok(orig)

    transcription = clean_transcripts(sample['transcription'].replace(UNK, ""))
    sid = sample['id']
    adjustment = adjusted_transcripts[split].get(sid, transcription)
    if bad(transcription, adjustment):
        # adjustment worse than just removing UNK
        # print("skipped:", transcription, "||", adjustment)
        adjustment = transcription
    return adjustment

# pass the split name explicitly so each split uses its own adjustment table
data = DatasetDict({
    split: data_raw[split].map(lambda x, s=split: {"transcription": fix_unk(x, s)})
    for split in data_raw
})
data = data.filter(lambda x: x["transcription"] != "")
data = data.sort(["session_id", "id"])

for split in data:
    for line in data[split]['transcription']:
        assert UNK not in line
```

The train split had 402 samples adjusted; 75 of these just had their UNKs removed.

The test split had 36 samples adjusted; 9 of these just had their UNKs removed.

The validation split had 63 samples adjusted; 7 of these just had their UNKs removed.
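
Finally, the cleaned splits can be published back to the Hub. This is a minimal sketch assuming the same repository id as above, not a record of the exact upload command used.

```python
# Hypothetical: upload the cleaned splits as the `default` config of this repository
data.push_to_hub("georgechang8/ASCEND_CLEAN", config_name="default")
```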