---
dataset_info:
  features:
  - name: content
    dtype: string
  - name: markdown
    dtype: string
  - name: url
    dtype: string
  splits:
  - name: train
    num_bytes: 82642012
    num_examples: 2067
  download_size: 34862377
  dataset_size: 82642012
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
It's what it says on the tin: ~2k websites that don't run JavaScript (aka probably really old and simple!). Some filtering is still needed to take out defunct sites and error pages, but I don't expect that to be a hard task; a possible starting point is sketched below.
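
A minimal filtering sketch using the `datasets` library. The Hub id is a placeholder (substitute the real dataset path), and the error-page markers and length threshold are illustrative assumptions, not part of the dataset:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the dataset's actual path on the Hub.
ds = load_dataset("user/no-js-websites", split="train")

# Heuristic blocklist of phrases that usually indicate a dead or parked site.
# These markers are assumptions chosen for illustration.
ERROR_MARKERS = (
    "404 not found",
    "403 forbidden",
    "this domain is for sale",
    "account suspended",
)

def looks_alive(example):
    """Keep pages with a reasonable amount of text and no obvious error phrases."""
    text = (example["content"] or "").lower()
    return len(text) > 200 and not any(marker in text for marker in ERROR_MARKERS)

filtered = ds.filter(looks_alive)
print(filtered)
```

Tuning the marker list and the minimum-length cutoff against a sample of the data should cover most of the defunct sites.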