---
language:
- vi
license: apache-2.0
tags:
- vietnamese
- text
- corpus
size_categories:
- 10M<n<100M
---

# Vietnamese Text Corpus

A Vietnamese text corpus combining Wikipedia articles, news articles, and other text documents.

## Dataset Statistics

- Wikipedia articles: >1.3M
- News articles: >13M
- Text documents: >200K

## Processing Details

- Processed using Apache Spark
- Minimum document length: 10 characters
- Text cleaning applied:
  - HTML/special character removal
  - Whitespace normalization
  - URL removal
  - Empty document filtering

## Data Format

Each document has two fields:

- `text`: the document content
- `source`: origin of the document (`wikipedia`, `news`, or `text`)

## Usage Example

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("{username}/{dataset_name}")

# Stream the dataset (memory efficient)
dataset = load_dataset(
    "{username}/{dataset_name}",
    streaming=True,
)

# Filter by source
wiki_docs = dataset.filter(lambda x: x["source"] == "wikipedia")
```

## Updates

- Released: 2024-12-17
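The cleaning steps listed under Processing Details can be sketched in plain Python. This is a minimal illustration of the described pipeline, not the actual Spark job; the function names and regexes are assumptions.

```python
import html
import re

MIN_DOC_LENGTH = 10  # minimum document length, per Processing Details

def clean_text(text: str) -> str:
    """Illustrative version of the cleaning steps described above."""
    text = html.unescape(text)                           # decode HTML entities
    text = re.sub(r"<[^>]+>", " ", text)                 # strip HTML tags
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # remove URLs
    text = re.sub(r"\s+", " ", text).strip()             # normalize whitespace
    return text

def keep_document(text: str) -> bool:
    """Filter out empty or too-short documents."""
    return len(text) >= MIN_DOC_LENGTH

raw = "<p>Xin ch&agrave;o!  Xem   https://example.com nh&eacute;</p>"
print(clean_text(raw))  # → "Xin chào! Xem nhé"
```

In the real pipeline these functions would run as Spark UDFs over the raw documents; the order (decode entities, strip tags, drop URLs, then normalize whitespace) matters, since whitespace normalization should run last to collapse the gaps left by the removals.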