Chennzi committed · Commit 502ea1d · verified · 1 Parent(s): 99a481e

Upload Multimodal-Mind2Web batch 1

README.md ADDED
@@ -0,0 +1,116 @@
---
license: other
tags:
- cua-lite
- gui
- sft
task_categories:
- image-text-to-text
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - "*/*/train.parquet"
    - "*/*/train/**.parquet"
  - split: validation
    path:
    - "*/*/validation.parquet"
    - "*/*/validation/**.parquet"
- config_name: web-trajectory
  data_files:
  - split: train
    path:
    - "web/trajectory/train.parquet"
    - "web/trajectory/train/**.parquet"
  - split: validation
    path:
    - "web/trajectory/validation.parquet"
    - "web/trajectory/validation/**.parquet"
---

# cua-lite/Multimodal-Mind2Web

The cua-lite preprocessed version of Multimodal-Mind2Web (osunlp/Multimodal-Mind2Web): web trajectory data with upstream's canonical four-way split, i.e. train plus three held-out test sets (test_task, test_website, test_domain) that capture successively harder generalization. The upstream split labels are preserved via `metadata.others.split` and routed into our validation split; the hash splitter never activates here.

## Origin

- [https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web](https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web)

## Load via `datasets`

```python
from datasets import load_dataset

# entire dataset
ds = load_dataset("cua-lite/Multimodal-Mind2Web")

# just one (platform, task_type) cohort
ds = load_dataset("cua-lite/Multimodal-Mind2Web", "web-trajectory")
```

You can also filter by `metadata.platform` / `metadata.task_type` / `metadata.others.*` after loading; every row carries a rich `metadata` struct (see schema below).

## Schema

Each row has these columns:

| column | type | notes |
|---|---|---|
| `image_ids` | list[string] | content-addressed ids (`<sha256>.<ext>`); enables cross-parquet / cross-dataset dedup |
| `images` | list[Image] | bytes embedded at HF push time; matches `image_ids` index-for-index |
| `messages` | list[struct] | OpenAI-style turns with `role` + structured `content` |
| `metadata` | struct | `{platform, task_type, split, others{...}}` |

Coordinate values in `messages` are normalized to `[0, 1000]` integers.

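As a hedged illustration (the helper name and the exact rounding are ours, not part of the dataset tooling), mapping raw pixel coordinates onto that grid looks like:

```python
def normalize_xy(x_px: int, y_px: int, width: int, height: int) -> tuple[int, int]:
    # Map pixel coordinates to the [0, 1000] integer grid used in `messages`.
    # Hypothetical helper: the rounding mode used upstream is an assumption.
    return round(x_px / width * 1000), round(y_px / height * 1000)

print(normalize_xy(640, 360, 1280, 720))  # (500, 500)
```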
## Layout

```
<platform>/<task_type>/<split>.parquet                                  # single-variant cohort
<platform>/<task_type>/<split>/<variant>.parquet                        # multi-variant cohort
<platform>/<task_type>/<split>/shard-NNNNN-of-NNNNN.parquet             # + sharded single-variant
<platform>/<task_type>/<split>/<variant>/shard-NNNNN-of-NNNNN.parquet   # + sharded multi-variant
```

- `platform` ∈ {desktop, mobile, web}
- `task_type` directory names use a hyphen where the metadata value uses a colon: `grounding-action/` → `grounding:action`
- `split` ∈ {train, validation}; `validation` is an in-distribution held-out slice (never used in training), while `test` is reserved for out-of-distribution benchmark datasets

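The hyphen-for-colon convention above can be expressed as a one-liner (illustrative helper, not shipped with the repo):

```python
def task_type_dir(task_type: str) -> str:
    # Directory form of a task_type: ':' in the metadata value becomes '-' on disk.
    return task_type.replace(":", "-")

print(task_type_dir("grounding:action"))  # grounding-action
```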
## Stats

| platform | task_type | variant | train | validation |
|---|---|---|---:|---:|
| web | trajectory | test_domain | 0 | 478 |
| web | trajectory | test_task | 0 | 99 |
| web | trajectory | test_website | 0 | 83 |
| web | trajectory | train | 602 | 0 |

## Image storage

Images are content-addressed by SHA-256 and deduplicated within this repo. The `images` column on HuggingFace embeds raw bytes so the Hub viewer renders thumbnails and `datasets.load_dataset` works out of the box.

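A minimal sketch of the id scheme (the function name is ours; the `<sha256>.<ext>` shape follows the schema table above):

```python
import hashlib

def image_id(data: bytes, ext: str) -> str:
    # Content-addressed id: hex SHA-256 of the raw image bytes plus the extension.
    return f"{hashlib.sha256(data).hexdigest()}.{ext}"

print(image_id(b"", "png"))
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png
```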
For local workflows (SFT export, cross-dataset dedup, split rebalancing), run [`reverse.py`](https://github.com/cua-lite/cua-lite/tree/main/scripts/hf_upload) on a cloned repo: it extracts each unique `image_id` once to a shared `image_store/<hash[:2]>/<hash>.<ext>` and rewrites the parquets to drop the `images` column, so rows reference images by hash id only. The shared store is reusable across datasets: the same image in two repos lands in one file.

- Total unique images: **7,423**
- Store size: **7.09 GB**

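After running `reverse.py`, resolving an `image_id` to its store path is mechanical; a sketch (helper name ours):

```python
from pathlib import Path

def store_path(image_id: str, root: str = "image_store") -> Path:
    # The store shards by the first two hex chars: <root>/<hash[:2]>/<image_id>
    sha = image_id.split(".", 1)[0]
    return Path(root) / sha[:2] / image_id

print(store_path("e3b0c442.png").as_posix())  # image_store/e3/e3b0c442.png
```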
## Notes

All three test splits are currently folded into our `validation` split. A future revision may promote test_website / test_domain to our canonical `test` split (out-of-distribution benchmark).

## License & citation

See the original dataset for license and citation terms: [osunlp/Multimodal-Mind2Web](https://huggingface.co/datasets/osunlp/Multimodal-Mind2Web)
stats.json ADDED
@@ -0,0 +1,13 @@
{
  "rows_in": 1262,
  "rows_out": 1262,
  "rows_dropped": 0,
  "unique_images": 7423,
  "image_store_bytes": 7091462842,
  "by_partition": {
    "web::trajectory::train::train": 602,
    "web::trajectory::validation::test_domain": 478,
    "web::trajectory::validation::test_task": 99,
    "web::trajectory::validation::test_website": 83
  }
}
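The `by_partition` counts are internally consistent with `rows_in` / `rows_out`; a quick check:

```python
# Partition counts copied from stats.json above.
by_partition = {
    "web::trajectory::train::train": 602,
    "web::trajectory::validation::test_domain": 478,
    "web::trajectory::validation::test_task": 99,
    "web::trajectory::validation::test_website": 83,
}
total = sum(by_partition.values())
print(total)  # 1262, matching rows_in and rows_out
```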
web/trajectory/train/train/shard-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a04ac4a4804427941ef2fe96477bd979b9c5ff24de6e5e3cddf553011a7dde18
size 1403092368
web/trajectory/train/train/shard-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a2de2e53a575032081452bdf500889eb7a1157da1e767894f19e39f1aad629a
size 1385422774
web/trajectory/train/train/shard-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0dc9c8a96ffd01b1d4f563ecde09dd990c5a1da14a7d25c0cd763ca7955c0648
size 1024802793
web/trajectory/validation/test_domain/shard-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27c40a8820ec0d115543a932e08cb2eebfe8e409221da8e7389782cd00a6b971
size 1374617270
web/trajectory/validation/test_domain/shard-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eba184198bdde947a8aefd2ea78dc8fb14d47ff9167efe940c5a87d8430134a1
size 1011364848
web/trajectory/validation/test_task.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:693efff5e5359a289f434dac1000f836a0d0c9c3fd3367e6c770a64a7ef003a4
size 553181430
web/trajectory/validation/test_website.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c11c5b4930f832a3c48da477d910c08c24925361bba41d7fa5e47135dbd10d44
size 562939735