---
language:
- vi
license: apache-2.0
tags:
- vietnamese
- text
- corpus
size_categories:
- 10M<n<100M
---
# Vietnamese Combined Corpus
## Dataset Statistics
- Total documents: <15M
- Wikipedia articles: >1.3M
- News articles: >13M
- Text documents: >200K
## Processing Details
- Processed using Apache Spark
- Minimum document length: 10 characters
- Text cleaning applied:
  - HTML/special character removal
  - Whitespace normalization
  - URL removal
  - Empty document filtering
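The cleaning steps above can be sketched in plain Python with regular expressions. This is an illustrative approximation only — the actual pipeline runs on Apache Spark, and the exact patterns used there are not specified in this card:

```python
import re

def clean_text(text: str, min_length: int = 10):
    """Hypothetical sketch of the cleaning steps listed above."""
    # HTML/special character removal (strip tags)
    text = re.sub(r"<[^>]+>", " ", text)
    # URL removal
    text = re.sub(r"https?://\S+", " ", text)
    # Whitespace normalization
    text = re.sub(r"\s+", " ", text).strip()
    # Empty/short document filtering: drop anything under min_length
    return text if len(text) >= min_length else None

print(clean_text("<p>Xin chào   Việt Nam! https://example.com</p>"))
# → Xin chào Việt Nam!
```

Documents that come back as `None` would be dropped before the corpus is written out.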
## Data Format
Each document has two fields:
- `text`: the document content
- `source`: origin of the document (`wikipedia`, `news`, or `text`)
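A record therefore looks like the following (the Vietnamese text here is a made-up example, not an actual corpus entry):

```python
# Illustrative record with the schema described above
example = {
    "text": "Hà Nội là thủ đô của Việt Nam.",
    "source": "wikipedia",  # one of: "wikipedia", "news", "text"
}

print(sorted(example.keys()))
# → ['source', 'text']
```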
## Usage Example
```python
from datasets import load_dataset
# Load full dataset
dataset = load_dataset("{username}/{dataset_name}")
# Filter by source
wiki_docs = dataset.filter(lambda x: x["source"] == "wikipedia")
```
## Updates
Released: 2024-12-17