---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: transcription
      dtype: string
    - name: duration
      dtype: float32
    - name: up_votes
      dtype: int32
    - name: down_votes
      dtype: int32
    - name: age
      dtype: string
    - name: gender
      dtype: string
    - name: accent
      dtype: string
  splits:
    - name: train
      num_bytes: 249774324
      num_examples: 26501
    - name: test
      num_bytes: 90296575
      num_examples: 9650
    - name: validation
      num_bytes: 78834938
      num_examples: 8639
    - name: validated
      num_bytes: 412113612
      num_examples: 46345
  download_size: 818561949
  dataset_size: 831019449
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
      - split: validated
        path: data/validated-*
license: cc0-1.0
task_categories:
  - automatic-speech-recognition
language:
  - tr
---

# Improving the CommonVoice 17 Turkish Dataset

I recently worked on enhancing the Mozilla CommonVoice 17 Turkish dataset to create a higher quality training set for speech recognition models.
Here's an overview of my process and findings.

## Initial Analysis and Split Organization

My first step was to analyze how the dataset is organized.
Using filename stems as unique keys, I identified and documented an important aspect of CommonVoice's design that might not be immediately clear to all users:

- The validated set (113,699 total files) completely contained all samples from:
  - Train split (35,035 files)
  - Test split (11,290 files)
  - Validation split (11,247 files)
- Additionally, the validated set had ~56K unique samples not present in any other split

This design follows CommonVoice's documentation, where dev/test/train are carefully reviewed subsets of the validated data.
However, this structure needs to be clearly understood to avoid potential data leakage when working with the dataset.
For example, using the validated set for training while evaluating on the test split would be problematic since the test data is already included in the validated set.

To create a clean dataset without overlaps, I:

  1. Identified all overlapping samples using filename stems as unique keys
  2. Removed samples that were already in train/test/validation splits from the validated set
  3. Created a clean, non-overlapping validated split with unique samples only

This approach ensures that researchers can either:

- Use the original train/test/dev splits as curated by CommonVoice, OR
- Use my cleaned validated set with their own custom splits

Both approaches are valid, but mixing them could lead to evaluation issues.
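
A minimal sketch of this overlap-removal step, using the per-split TSV metadata files from the CommonVoice release (`train.tsv`, `test.tsv`, `dev.tsv`, `validated.tsv` and their `path` column); the actual ASRTK implementation may differ:

```python
from pathlib import Path
import csv

def stems(tsv_path: str) -> set[str]:
    """Collect filename stems (e.g. 'common_voice_tr_12345') from a CommonVoice TSV."""
    with open(tsv_path, newline="", encoding="utf-8") as f:
        return {Path(row["path"]).stem for row in csv.DictReader(f, delimiter="\t")}

# Stems already covered by the curated train/test/dev splits.
reserved = stems("train.tsv") | stems("test.tsv") | stems("dev.tsv")

# Keep only validated samples that do not appear in any curated split.
with open("validated.tsv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

unique_validated = [r for r in rows if Path(r["path"]).stem not in reserved]
print(f"{len(unique_validated)} non-overlapping validated samples")
```

Keying on filename stems rather than full paths keeps the comparison stable even after the clips are converted from mp3 to 16 kHz wav.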

## Audio Processing and Quality Improvements

### Audio Resampling

All audio files were resampled to 16 kHz (see the sketch after this list) to:

- Make the dataset directly compatible with Whisper and similar models
- Eliminate the need for runtime resampling during training
- Ensure consistent audio quality across the dataset
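
A minimal resampling sketch using librosa and soundfile; directory names are placeholders and ASRTK's own pipeline may use different tooling:

```python
from pathlib import Path

import librosa
import soundfile as sf

TARGET_SR = 16_000
src_dir, dst_dir = Path("clips"), Path("clips_16k")
dst_dir.mkdir(exist_ok=True)

for mp3 in src_dir.glob("*.mp3"):
    # librosa decodes the mp3 and resamples to 16 kHz mono in one call.
    audio, _ = librosa.load(mp3, sr=TARGET_SR, mono=True)
    sf.write(dst_dir / mp3.with_suffix(".wav").name, audio, TARGET_SR)
```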

### Silence Trimming

I processed all audio files to remove unnecessary silence and noise (see the sketch after this list):

- Used Silero VAD with a threshold of 0.6 to detect speech segments
- Trimmed leading and trailing silences
- Removed microphone noise and clicks at clip boundaries
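
A sketch of the trimming step using the public Silero VAD API via torch.hub; the 0.6 threshold matches the list above, while the file paths are placeholders and ASRTK's batching and noise handling are omitted:

```python
from pathlib import Path

import torch

# Load the Silero VAD model and its helper utilities from torch.hub.
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, save_audio, read_audio, _, _ = utils

SR = 16_000
Path("clips_trimmed").mkdir(exist_ok=True)
wav = read_audio("clips_16k/sample.wav", sampling_rate=SR)

# Detect speech regions with a 0.6 confidence threshold.
speech = get_speech_timestamps(wav, model, threshold=0.6, sampling_rate=SR)

if speech:
    # Keep everything from the first speech onset to the last speech offset,
    # dropping leading/trailing silence and boundary clicks.
    trimmed = wav[speech[0]["start"]:speech[-1]["end"]]
    save_audio("clips_trimmed/sample.wav", trimmed, sampling_rate=SR)
```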

### Duration Filtering and Analysis

I analyzed each split separately after trimming silences. Here are the detailed findings per split:

| Split | Files Before | Files After | Short Files | Duration Before (hrs) | Duration After (hrs) | Duration Reduction % | Short Files Duration (hrs) | Files Reduction % |
|---|---|---|---|---|---|---|---|---|
| Train | 35,035 | 26,501 | 8,633 | 35.49 | 19.84 | 44.1% | 2.00 | 24.4% |
| Test | 11,290 | 9,651 | 1,626 | 13.01 | 7.34 | 43.6% | 0.37 | 14.5% |
| Validation | 11,247 | 8,640 | 2,609 | 11.17 | 6.27 | 43.9% | 0.60 | 23.2% |
| Validated | 56,127 | 46,348 | 9,991 | 56.71 | 32.69 | 42.4% | 2.29 | 17.4% |
| Total | 113,699 | 91,140 | 22,859 | 116.38 | 66.14 | 43.2% | 5.26 | 19.8% |

Note: Files with duration shorter than 1.0 seconds were removed from the dataset.
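
The duration filter itself is straightforward; a sketch assuming soundfile and the trimmed-clips directory from the previous step:

```python
from pathlib import Path

import soundfile as sf

MIN_DURATION = 1.0  # seconds

kept, dropped = [], []
for wav in Path("clips_trimmed").glob("*.wav"):
    info = sf.info(wav)
    duration = info.frames / info.samplerate
    (kept if duration >= MIN_DURATION else dropped).append(wav)

print(f"kept {len(kept)} clips, dropped {len(dropped)} shorter than {MIN_DURATION}s")
```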

#### Validation Split Analysis (formerly Eval)

- Original files: 11,247
- Found 2,609 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 0.60 hours (≈36 minutes)
  - Average duration: 0.83 seconds
  - Shortest file: 0.65 seconds
  - Longest file: 0.97 seconds

#### Train Split Analysis

- Original files: 35,035
- Found 8,633 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 2.00 hours
  - Average duration: 0.82 seconds
  - Shortest file: 0.08 seconds
  - Longest file: 0.97 seconds

#### Test Split Analysis

- Original files: 11,290
- Found 1,626 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 0.37 hours (≈22 minutes)
  - Average duration: 0.85 seconds
  - Shortest file: 0.65 seconds
  - Longest file: 0.97 seconds

#### Validated Split Analysis

- Original files: 56,127
- Found 9,991 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 2.29 hours
  - Average duration: 0.83 seconds
  - Shortest file: 0.65 seconds
  - Longest file: 0.97 seconds

All short clips were removed from the dataset to ensure consistent quality. The final dataset contains only clips longer than 1.0 second, with average durations between 2.54 and 2.74 seconds across splits.

### Final Split Statistics

The cleaned dataset was organized into:

- Train: 26,501 files (19.84 hours, avg duration: 2.69s, min: 1.04s, max: 9.58s)
- Test: 9,650 files (7.33 hours, avg duration: 2.74s, min: 1.08s, max: 9.29s)
- Validation: 8,639 files (6.27 hours, avg duration: 2.61s, min: 1.04s, max: 9.18s)
- Validated: 46,345 files (32.69 hours, avg duration: 2.54s, min: 1.04s, max: 9.07s)

### Final Dataset Split Metrics

| Split | Files | Duration (hours) | Avg Duration (s) | Min Duration (s) | Max Duration (s) |
|---|---|---|---|---|---|
| Train | 26,501 | 19.84 | 2.69 | 1.04 | 9.58 |
| Test | 9,650 | 7.33 | 2.74 | 1.08 | 9.29 |
| Validation | 8,639 | 6.27 | 2.61 | 1.04 | 9.18 |
| Validated | 46,345 | 32.69 | 2.54 | 1.04 | 9.07 |

- Total files processed: 91,135
- Valid entries created: 91,135
- Files skipped: 0
- Total dataset duration: 66.13 hours
- Average duration across all splits: 2.61 seconds

The dataset was processed in the following order:

  1. Train split (26,501 files)
  2. Test split (9,650 files)
  3. Validation split (8,639 files), known as the "eval" split in some CommonVoice versions
  4. Validated split (46,345 files)

Note: The validation split (sometimes referred to as the "eval" split in CommonVoice documentation) serves the same purpose: it is a held-out set for model validation during training.
We've standardized the naming to "validation" throughout this documentation for consistency with common machine learning terminology.

One text file in the validated split was flagged for being too short (2 characters), but was still included in the final dataset.

The processed dataset was saved as 'commonvoice_17_tr_fixed'.
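
A minimal example of loading the published dataset with the Hugging Face datasets library; the repo id is assumed from the dataset name above, so adjust it if the actual id differs:

```python
from datasets import Audio, load_dataset

# Repo id assumed from the name above; adjust if needed.
ds = load_dataset("ysdede/commonvoice_17_tr_fixed")
print(ds)  # DatasetDict with train / test / validation / validated splits

sample = ds["train"][0]
print(sample["transcription"], sample["duration"])

# Clips are already stored at 16 kHz; casting just makes that explicit.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```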

## Text Processing and Standardization

### Character Set Optimization

- Created a comprehensive charset from all text labels (see the sketch below)
- Simplified the character set by:
  - Standardizing quotation marks
  - Removing infrequently used special characters
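
A sketch of the charset pass over the transcriptions; the quote mapping and rarity cutoff here are illustrative, not the exact rules used:

```python
from collections import Counter

# Map typographic quotes to plain ASCII equivalents.
QUOTE_MAP = str.maketrans({"“": '"', "”": '"', "„": '"', "«": '"', "»": '"', "’": "'", "‘": "'"})

# 'transcriptions' stands in for the dataset's text labels.
transcriptions = ["“Merhaba” dedi.", "Bugün hava çok güzel."]

charset = Counter()
for text in transcriptions:
    charset.update(text.translate(QUOTE_MAP))

# Characters seen only a handful of times are candidates for removal or mapping.
rare_chars = [ch for ch, n in charset.items() if n < 5]
print(sorted(charset), rare_chars)
```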

### Text Quality Improvements

- Generated word frequency metrics to identify potential issues (see the sketch below)
- Corrected common Turkish typos and grammar errors
- Standardized punctuation and spacing
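
Continuing the charset sketch above, a simple word-frequency pass that surfaces likely typos; the threshold is hypothetical, and the actual corrections were curated manually:

```python
import re
from collections import Counter

word_counts = Counter()
for text in transcriptions:
    # Note: str.lower() does not apply Turkish-specific I/ı casing rules.
    word_counts.update(re.findall(r"\w+", text.lower()))

# Across the full dataset, words that occur only once are often typos worth reviewing.
suspects = [w for w, n in word_counts.items() if n == 1]
print(word_counts.most_common(10), suspects[:10])
```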

## Results

The final dataset shows significant improvements:

  • Removed unnecessary silence and noise from audio
  • Consistent audio durations above 1.0 seconds
  • Standardized text with corrected Turkish grammar and typography
  • Maintained original metadata (age, upvotes, etc.)

These improvements make the dataset more suitable for training speech recognition models while maintaining the diversity and richness of the original CommonVoice collection.

## Tools Used

This dataset processing work was completed using ASRTK (Automatic Speech Recognition Toolkit), an open-source Python toolkit designed to streamline the development and enhancement of ASR systems. ASRTK provides utilities for:

  • Audio processing with advanced splitting and resampling capabilities
  • Text normalization and cleaning
  • Forced alignment using Silero VAD models
  • Efficient batch processing with multi-threading support

The toolkit is available under the MIT license and welcomes contributions from the community.