---
language:
- vi
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 65506190827
    num_examples: 12169131
  download_size: 34648619492
  dataset_size: 65506190827
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

### Dataset Description

Vietnamese Curated Text Dataset. This dataset is collected from multiple open Vietnamese datasets and curated with [NeMo Curator](https://github.com/NVIDIA/NeMo-Curator).

- **Developed by:** Viettel Solution
- **Language:** Vietnamese

### Details

#### Data Collection

We utilize a combination of datasets containing Vietnamese-language samples, ensuring a robust and representative text corpus. These datasets include:

- The Vietnamese subset of the [C4 dataset](https://huggingface.co/datasets/allenai/c4/viewer/vi).
- The Vietnamese subset of the [OSCAR dataset, version 23.01](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301/tree/main/vi_meta).
- [Wikipedia's Vietnamese articles](https://huggingface.co/datasets/wikimedia/wikipedia/viewer/20231101.vi).
- [Binhvq's Vietnamese news corpus](https://huggingface.co/datasets/jetaudio/binhvq_news).

#### Preprocessing

We use [NeMo Curator](https://github.com/NVIDIA/NeMo-Curator) to curate the collected data. The curation pipeline includes these key steps (minimal code sketches appear at the end of this card):

1. Unicode reformatting: texts are standardized into a consistent Unicode format to avoid encoding issues.
2. Exact deduplication: exact duplicate documents are removed to reduce redundancy.
3. Quality filtering:
   - Heuristic filtering: rule-based filters remove low-quality content.
   - Classifier-based filtering: a machine-learning classifier scores documents and filters out low-quality ones.

#### Dataset Statistics

**Content diversity**

*Figure: domain proportions in the curated dataset.*

**Character-based metrics**

*Figure: box plots of the percentage of symbols, numbers, and whitespace characters relative to total characters, along with word counts and average word lengths.*

**Token count distribution**

*Figure: distribution of document sizes in token counts.*

**Embedding visualization**

*Figure: UMAP visualization of 5% of the dataset.*
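For reference, the sketches below illustrate how the curated dataset can be loaded, how a comparable NeMo Curator pipeline can be assembled, and how the statistics above can be recomputed. The repository id, paths, model choices, and thresholds in them are illustrative assumptions, not the exact configuration used to build this dataset.

To load the dataset with the `datasets` library (the repository id below is a placeholder for this card's Hub path):

```python
from datasets import load_dataset

# "<org>/<dataset-name>" is a placeholder for this repository's Hub id.
ds = load_dataset("<org>/<dataset-name>", split="train")
print(ds.features)       # text, id, domain
print(ds[0]["domain"])   # source domain of the first document
```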
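The preprocessing steps above map onto NeMo Curator modules roughly as follows. This is a minimal sketch, assuming JSONL input files with a `text` field; the filters and thresholds shown are examples of heuristic filtering, and exact deduplication (step 2) and classifier-based filtering would use NeMo Curator's dedicated modules (e.g. `ExactDuplicates`), omitted here for brevity.

```python
from nemo_curator import Modify, ScoreFilter, Sequential
from nemo_curator.datasets import DocumentDataset
from nemo_curator.filters import SymbolsToWordsFilter, WordCountFilter
from nemo_curator.modifiers import UnicodeReformatter
from nemo_curator.utils.file_utils import get_all_files_paths_under

# Load raw documents (JSONL files with a "text" field); paths are hypothetical.
files = get_all_files_paths_under("raw_vi/")
dataset = DocumentDataset.read_json(files)

pipeline = Sequential([
    # Step 1: standardize Unicode.
    Modify(UnicodeReformatter()),
    # Step 3, heuristic part: drop very short and symbol-heavy documents
    # (thresholds are illustrative, not this dataset's exact settings).
    ScoreFilter(WordCountFilter(min_words=50, lang="vi")),
    ScoreFilter(SymbolsToWordsFilter(max_symbol_to_word_ratio=0.1, lang="vi")),
])

curated = pipeline(dataset)
curated.to_json("curated_vi/")
```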
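The character-based metrics can be recomputed per document with plain Python; the helper below mirrors the quantities plotted above.

```python
def char_stats(text: str) -> dict:
    """Percentages of symbol, number, and whitespace characters,
    plus word count and average word length, for one document."""
    n = max(len(text), 1)  # guard against empty documents
    words = text.split()
    return {
        "pct_symbols": 100 * sum(not c.isalnum() and not c.isspace() for c in text) / n,
        "pct_numbers": 100 * sum(c.isdigit() for c in text) / n,
        "pct_whitespace": 100 * sum(c.isspace() for c in text) / n,
        "word_count": len(words),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

print(char_stats("Hà Nội có hơn 8 triệu dân."))
```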
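For the token count distribution, the card does not state which tokenizer was used; the sketch below assumes a Vietnamese tokenizer from the Hugging Face Hub purely for illustration.

```python
from transformers import AutoTokenizer

# Tokenizer choice is an assumption; the card does not specify one.
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

sample_docs = ["Xin chào thế giới.", "Hà Nội là thủ đô của Việt Nam."]
token_counts = [len(tokenizer.encode(doc)) for doc in sample_docs]
print(token_counts)  # document sizes in tokens, including special tokens
```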
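The embedding visualization can be reproduced along these lines; the embedding model, sample size, and UMAP parameters are assumptions, and `<org>/<dataset-name>` is again a placeholder for this repository's Hub id.

```python
import matplotlib.pyplot as plt
import umap  # from the umap-learn package
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# A small slice stands in for the 5% sample used in the figure above.
texts = load_dataset("<org>/<dataset-name>", split="train[:20000]")["text"]

# Multilingual embedding model chosen for illustration only.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(texts, batch_size=256, show_progress_bar=True)

# Project embeddings to 2-D and plot.
coords = umap.UMAP(n_components=2, metric="cosine").fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], s=2, alpha=0.5)
plt.title("UMAP projection of document embeddings")
plt.savefig("umap.png", dpi=150)
```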