LZpenguin committed
Commit 03e532e
Parent: 27d7a22

Upload dataset (part 00001-of-00002)
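This commit message is the style the `datasets` library writes when `push_to_hub` splits a large upload across several commits. A minimal sketch of such a push, assuming a local copy of the parquet shards; the data paths and repo id below are hypothetical placeholders, not taken from this commit:

```python
# Minimal sketch of the kind of push that yields multi-part commit messages
# such as "Upload dataset (part 00001-of-00002)". The data paths and repo id
# are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("parquet", data_files={
    "train": "data/train-*.parquet",  # hypothetical local shards
    "test": "data/test-*.parquet",
    "val": "data/val-*.parquet",
})
# For large datasets, the library itself splits this into several commits.
ds.push_to_hub("LZpenguin/vision2ui")  # hypothetical repo id
```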

README.md CHANGED
@@ -1,5 +1,10 @@
 ---
 license: mit
+size_categories:
+- 100M<n<1B
+task_categories:
+- image-to-text
+pretty_name: vision2ui
 dataset_info:
   features:
   - name: image
@@ -10,16 +15,16 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 4189624847.8344607
-    num_examples: 19514
+    num_bytes: 21648361007.165703
+    num_examples: 67847
   - name: test
-    num_bytes: 523649431.37584555
-    num_examples: 2439
+    num_bytes: 2706085010.417149
+    num_examples: 8481
   - name: val
-    num_bytes: 523864129.7896938
-    num_examples: 2440
+    num_bytes: 2706085010.417149
+    num_examples: 8481
-  download_size: 4642434734
-  dataset_size: 5237138409
+  download_size: 24385961099
+  dataset_size: 27060531028.0
 configs:
 - config_name: default
   data_files:
@@ -29,13 +34,8 @@ configs:
     path: data/test-*
   - split: val
     path: data/val-*
-task_categories:
-- image-to-text
 tags:
 - code
-pretty_name: vision2ui
-size_categories:
-- 100M<n<1B
 ---
 VISION2UI: A Real-World Dataset with Layout for Code Generation from UI Designs
 > Automatically generating UI code from webpage design visions can significantly alleviate the burden of developers, enabling beginner developers or designers to generate Web pages directly from design diagrams. Prior research has accomplished the generation of UI code from rudimentary design visions or sketches by designing deep neural networks. Inspired by the groundbreaking advancements achieved by Multimodal Large Language Models (MLLMs), the automatic generation of UI code from high-fidelity design images is now emerging as a viable possibility. Nevertheless, our investigation reveals that existing MLLMs are hampered by the scarcity of authentic, high-quality, and large-scale datasets, leading to unsatisfactory performance in automated UI code generation. To bridge this gap, we present a novel dataset, termed VISION2UI, extracted from real-world scenarios and augmented with comprehensive layout information, tailored specifically for finetuning MLLMs in UI code generation. The dataset is derived through a series of operations encompassing the collection, cleaning, and filtering of the open-source Common Crawl dataset. To uphold its quality, a neural scorer trained on labeled samples is used to refine the data, retaining the higher-quality instances. Ultimately, this process yields a dataset comprising 2,000 parallel samples (with many more coming soon) pairing design visions with UI code.
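Taken together, the updated card declares one default config with train/test/val parquet splits. A minimal loading sketch with the `datasets` library; the repo id is a hypothetical placeholder for this dataset's actual `<namespace>/<name>`:

```python
# Minimal sketch: load the three splits declared in the card above.
# "LZpenguin/vision2ui" is a hypothetical repo id -- substitute the real one.
from datasets import load_dataset

ds = load_dataset("LZpenguin/vision2ui")

# The updated card reports 67,847 / 8,481 / 8,481 examples.
for split in ("train", "test", "val"):
    print(split, ds[split].num_rows)

# Each record carries an `image` feature plus string fields (e.g. the UI code).
print(ds["train"][0].keys())
```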
data/val-00000-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:399e652737e44e2da50d5e41ed076bad5424c47ecd42199d836cb084c585bfbe
+size 416145696
data/val-00001-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8e47e536a525a1df6ea9f8f8b00c271ece83c3ab99bade9c9cb26f96a6744d8
+size 410291639
data/val-00002-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:995187fcea9846d963f0b6d4bc02fba83ff4210f29df082bb66eabcae3f97c97
+size 404304789
data/val-00003-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:240491c58a5d23253e6b1d20174f9b9b9105870ba1eca8200cbf23e219ad2c91
+size 415726791
data/val-00004-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6505526e6b20dce3ec3cb75854c38c7275c922979cf2466b304e4ad90285a934
+size 413453132
data/val-00005-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a774aac1f983920aa56dae48dd28995f1a6e4ca4c35b47e1fd06355ad916ca99
+size 406630345
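Each `ADDED` entry above is a Git LFS pointer, not the parquet data itself: the repository tracks only a `version` line, a SHA-256 `oid`, and the byte `size`, while the shard lives in LFS storage. A minimal sketch of checking a downloaded shard against its pointer; the file paths are hypothetical:

```python
# Minimal sketch: verify a downloaded parquet shard against its Git LFS pointer.
# Both paths below are hypothetical -- point them at a real pointer file and
# the real shard bytes.
import hashlib

def lfs_pointer_fields(pointer_path: str) -> dict:
    """Parse the 'key value' lines of a Git LFS pointer file."""
    fields = {}
    with open(pointer_path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

def verify_shard(shard_path: str, pointer_path: str) -> bool:
    fields = lfs_pointer_fields(pointer_path)
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])

    h = hashlib.sha256()
    size = 0
    with open(shard_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest() == expected_oid and size == expected_size

print(verify_shard("data/val-00000-of-00006.parquet",   # hypothetical shard path
                   "pointers/val-00000-of-00006.txt"))  # hypothetical pointer path
```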