---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 52157654
    num_examples: 1313525
  download_size: 48108448
  dataset_size: 52157654
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc0-1.0
task_categories:
- text-classification
pretty_name: WebUI tokens (unlabelled)
size_categories:
- 1M<n<10M
source_datasets:
- gbenson/webui-dom-snapshots
---
# Dataset Card for WebUI tokens (unlabelled)
Every token over 4 characters long from [gbenson/webui-dom-snapshots](https://huggingface.co/datasets/gbenson/webui-dom-snapshots).
- **Curated by:** Gary Benson
- **License:** CC0 1.0 Universal
## Uses
I'm using it to develop a DOM-aware tokenizer for HTML.
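
If you want to poke at the tokens yourself, here's a minimal loading sketch using the 🤗 `datasets` library. The repo id is an assumption based on this card's title and owner, so adjust it if the dataset lives under a different name:

```python
from datasets import load_dataset

# Repo id assumed from the card's title and owner; adjust if it differs.
ds = load_dataset("gbenson/webui-tokens-unlabelled", split="train")

print(len(ds))        # 1313525 examples, per the metadata above
print(ds[0]["text"])  # each row has a single string feature, "text"
```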
## Bias, Risks, and Limitations
- 87% of the source dataset consisted of English-language websites, with no other language exceeding 2% of the total
- Tokens were coerced to ASCII using Unidecode (see the sketch below)
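
To illustrate what that coercion implies, here is a minimal sketch using the `unidecode` package; the example strings are illustrative, not taken from the dataset:

```python
from unidecode import unidecode

# Transliteration maps non-ASCII characters to nearby ASCII sequences,
# so distinct source tokens can collide after coercion.
print(unidecode("café"))   # -> "cafe"
print(unidecode("naïve"))  # -> "naive"
print(unidecode("日本語"))  # -> "Ri Ben Yu " (note the trailing space)
```

One consequence is that tokens differing only in diacritics or script can become identical in this dataset.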