---
language:
- id
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: word_count
    dtype: int64
  splits:
  - name: unfiltered
    num_bytes: 361624744
    num_examples: 157259
  - name: filtered
    num_bytes: 145352179.80208445
    num_examples: 63209
  download_size: 267272223
  dataset_size: 506976923.80208445
configs:
- config_name: default
  data_files:
  - split: unfiltered
    path: data/unfiltered-*
  - split: filtered
    path: data/filtered-*
---
Literally the title. This dataset is derived from wikimedia/wikipedia; only entries whose text contains the case-insensitive string 'Indonesia' are included. You can use it to train a model on information about Indonesia written in Indonesian.
Additional filtering was applied to produce the `filtered` split, removing entries that are either too short or too long (per the statistics below, it keeps entries of 98 to 717 words).
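The selection and filtering steps above can be sketched in plain Python. The 98–717 word band is inferred from the filtered split's min/max word counts in the table below; the exact thresholds the author used are an assumption, and the sample entries are purely illustrative.

```python
def mentions_indonesia(text: str) -> bool:
    # Case-insensitive substring match, as described in the card.
    return "indonesia" in text.lower()

def word_count(text: str) -> int:
    # Simple whitespace tokenization, matching the word_count feature.
    return len(text.split())

# Illustrative stand-ins for Wikipedia entries.
entries = [
    {"title": "Jakarta", "text": "Jakarta adalah ibu kota Indonesia. " * 40},
    {"title": "Paris",   "text": "Paris is the capital of France."},
    {"title": "Stub",    "text": "Indonesia."},
]

# "unfiltered" split: every entry mentioning Indonesia.
unfiltered = [e for e in entries if mentions_indonesia(e["text"])]

# "filtered" split: additionally drop entries that are too short or too long
# (assumed band of 98-717 words, taken from the table's min/max).
filtered = [
    {**e, "word_count": word_count(e["text"])}
    for e in unfiltered
    if 98 <= word_count(e["text"]) <= 717
]
```

Here the Jakarta entry (200 words) survives both steps, the Paris entry is dropped for not mentioning Indonesia, and the one-word stub is dropped by the length filter.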
| Split | Total Word Count | Mean Word Count | Std. Dev. of Word Count | Median Word Count | Number of Entries | Min Word Count | Max Word Count |
|---|---|---|---|---|---|---|---|
| Unfiltered | 47,039,325 | 299.12 | 717.74 | 98 | 157,259 | 3 | 95,967 |
| Filtered | 18,104,407 | 286.42 | 160.35 | 239 | 63,209 | 98 | 717 |