---
license: apache-2.0
dataset_info:
  features:
  - name: width
    dtype: int64
  - name: height
    dtype: int64
  - name: image
    dtype: image
  - name: objects
    struct:
    - name: bbox
      sequence:
        sequence: float64
    - name: category
      sequence: string
    - name: color
      list:
      - name: alpha
        dtype: float64
      - name: blue
        dtype: float64
      - name: green
        dtype: float64
      - name: red
        dtype: float64
    - name: radius
      sequence: float64
    - name: text
      sequence: string
  splits:
  - name: train
    num_bytes: 1253458059.322
    num_examples: 7846
  download_size: 1160884066
  dataset_size: 1253458059.322
task_categories:
- object-detection
tags:
- ui
- design
- detection
size_categories:
- 1K<n<10K
---

# Dataset: Mobile UI Design Detection

## Introduction

This dataset is designed for object detection tasks focused on detecting elements in mobile UI designs. The target objects are text, images, rectangles, and groups. Each example contains an image together with its object detection annotations, including class labels and location information.

## Dataset Content

Load the dataset and take a look at an example:

```python
>>> from datasets import load_dataset
>>> ds = load_dataset("mrtoy/mobile-ui-design")
>>> example = ds["train"][0]
>>> example
{'width': 375,
 'height': 667,
 'image': <PIL.Image.Image>,
 'objects': {'bbox': [[0.0, 0.0, 375.0, 667.0], [0.0, 0.0, 375.0, 667.0], [0.0, 0.0, 375.0, 20.0], ...],
  'category': ['text', 'rectangle', 'rectangle', ...]}}
```

The dataset has the following fields:

- image: A PIL.Image.Image object containing the image.
- height: The image height.
- width: The image width.
- objects: A dictionary containing bounding box metadata for the objects in the image:
  - bbox: The object's bounding box as (xmin, ymin, width, height).
  - category: The object's category; possible values are rectangle, text, group, and image.
  - color: The object's color (text color or rectangle color), or None.
  - radius: The object's rectangle corner radius, or None.
  - text: The text content, or None.

You can visualize the bboxes on the image using torchvision's drawing utilities.

```python
import torch
from torchvision.ops import box_convert
from torchvision.utils import draw_bounding_boxes
from torchvision.transforms.functional import pil_to_tensor, to_pil_image

item = ds["train"][0]

# Convert the (xmin, ymin, width, height) boxes to xyxy for drawing.
boxes_xywh = torch.tensor(item['objects']['bbox'])
boxes_xyxy = box_convert(boxes_xywh, 'xywh', 'xyxy')

to_pil_image(
    draw_bounding_boxes(
        pil_to_tensor(item['image']),
        boxes_xyxy,
        labels=item['objects']['category'],
    )
)
```

![image](9b8671a5-b529-41dc-b951-b29a8b29da64.png)
![image](11c03c2c-39ac-442b-9c1a-67e1e0a2aea7.png)
![image](ec197c72-f8ba-4f79-81fa-ceaf533cb5e3.png)

## Applications

This dataset can be used for various applications, such as:

- Training and evaluating object detection models for mobile UI designs (see the sketch after this list).
- Identifying design patterns and trends to aid UI designers and developers in creating high-quality mobile app UIs.
- Automating the generation of UI design templates.
- Improving image recognition and analysis in the field of mobile UI design.
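
For the first application, here is a minimal sketch of turning one example into the `(image, target)` pair expected by torchvision detection models such as Faster R-CNN. The `CATEGORY_TO_ID` mapping and the `to_detection_target` helper are illustrative names, not part of the dataset; the mapping assumes the four categories listed above, with label 0 reserved for background.

```python
import torch
from torchvision.ops import box_convert
from torchvision.transforms.functional import pil_to_tensor

# Illustrative mapping for the categories listed above; 0 is reserved for background.
CATEGORY_TO_ID = {"rectangle": 1, "text": 2, "group": 3, "image": 4}

def to_detection_target(example):
    # Detection models expect a float image in [0, 1] with shape (C, H, W).
    image = pil_to_tensor(example["image"].convert("RGB")).float() / 255.0

    # Convert the (xmin, ymin, width, height) boxes to the xyxy format
    # used by torchvision detection targets.
    boxes_xywh = torch.tensor(example["objects"]["bbox"], dtype=torch.float32)
    target = {
        "boxes": box_convert(boxes_xywh, "xywh", "xyxy"),
        "labels": torch.tensor(
            [CATEGORY_TO_ID[c] for c in example["objects"]["category"]],
            dtype=torch.int64,
        ),
    }
    return image, target

image, target = to_detection_target(ds["train"][0])
```

Batches of such pairs can then be fed to a detection model like `torchvision.models.detection.fasterrcnn_resnet50_fpn` during training.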