Update README.md
README.md CHANGED
```diff
@@ -14,4 +14,27 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+license: cc0-1.0
+task_categories:
+- text-classification
+pretty_name: WebUI tokens (unlabelled)
+size_categories:
+- 1M<n<10M
+source_datasets:
+- gbenson/webui-dom-snapshots
 ---
+# Dataset Card for WebUI tokens (unlabelled)
+
+Every token over 4 characters long from [`gbenson/webui-dom-snapshots`](https://huggingface.co/datasets/gbenson/webui-dom-snapshots).
+
+- **Curated by:** [Gary Benson](https://gbenson.net/)
+- **License:** [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/)
+
+## Uses
+
+I'm using it to develop a [DOM-aware tokenizer](https://github.com/gbenson/dom-tokenizers) for HTML.
+
+## Bias, Risks, and Limitations
+
+- 87% of the source dataset was English-language websites, with no other language exceeding 2% of the total
+- Tokens were coerced to ASCII using [Unidecode](https://pypi.org/project/Unidecode/)
```
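The ASCII coercion noted in the last limitation is lossy and one-way, which is worth seeing concretely. A small sketch of what Unidecode does to accented and non-Latin text (outputs are approximate):

```python
# Unidecode transliterates Unicode text to plain ASCII.
# The mapping is lossy: accents are dropped and non-Latin scripts
# get rough romanisations, which is the limitation the card notes.
from unidecode import unidecode

print(unidecode("café"))   # -> "cafe"
print(unidecode("naïve"))  # -> "naive"
print(unidecode("東京"))   # -> roughly "Dong Jing" (a Mandarin reading, not "Tokyo")
```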
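And a minimal loading sketch using the `datasets` library, for anyone who wants to inspect the tokens; the repository ID `gbenson/webui-tokens-unlabelled` is an assumption based on the card's title, not something the commit confirms:

```python
# Minimal sketch: stream a few records rather than downloading the
# full dataset. The repo ID below is assumed from the card's title;
# adjust it to the actual dataset repository if it differs.
from datasets import load_dataset

ds = load_dataset("gbenson/webui-tokens-unlabelled", split="train", streaming=True)
for i, row in enumerate(ds):
    print(row)
    if i == 4:  # show the first five records only
        break
```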