Commit 3d890ae (1 parent: c0f70ef), committed by parquet-converter

Update parquet files
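This commit replaces the zip archives and the Python loading script with per-split parquet files under `full/` and `mini/`. A minimal sketch of loading the converted dataset, assuming the Hub resolves the parquet files to the same `full`/`mini` config names the deleted script used:

```python
from datasets import load_dataset

# Assumption: after conversion, the "full" and "mini" configs resolve to the
# parquet files under full/ and mini/ rather than the old zip archives.
ds = load_dataset("keremberke/satellite-building-segmentation", name="full")
example = ds["train"][0]   # splits: train, validation, test
print(example.keys())      # image_id, image, width, height, objects
```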

README.dataset.txt DELETED
@@ -1,6 +0,0 @@
- # Buildings Instance Segmentation > raw-images
- https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation
-
- Provided by a Roboflow user
- License: CC BY 4.0
-
README.md DELETED
@@ -1,97 +0,0 @@
- ---
- task_categories:
- - image-segmentation
- tags:
- - roboflow
- - roboflow2huggingface
- - Aerial
- - Logistics
- - Construction
- - Damage Risk
- - Other
- ---
-
- <div align="center">
-   <img width="640" alt="keremberke/satellite-building-segmentation" src="https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/thumbnail.jpg">
- </div>
-
- ### Dataset Labels
-
- ```
- ['building']
- ```
-
-
- ### Number of Images
-
- ```json
- {'train': 6764, 'valid': 1934, 'test': 967}
- ```
-
-
- ### How to Use
-
- - Install [datasets](https://pypi.org/project/datasets/):
-
- ```bash
- pip install datasets
- ```
-
- - Load the dataset:
-
- ```python
- from datasets import load_dataset
-
- ds = load_dataset("keremberke/satellite-building-segmentation", name="full")
- example = ds['train'][0]
- ```
-
- ### Roboflow Dataset Page
- [https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1](https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1?ref=roboflow2huggingface)
-
- ### Citation
-
- ```
- @misc{ buildings-instance-segmentation_dataset,
-     title = { Buildings Instance Segmentation Dataset },
-     type = { Open Source Dataset },
-     author = { Roboflow Universe Projects },
-     howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
-     url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
-     journal = { Roboflow Universe },
-     publisher = { Roboflow },
-     year = { 2023 },
-     month = { jan },
-     note = { visited on 2023-01-16 },
- }
- ```
-
- ### License
- CC BY 4.0
-
- ### Dataset Summary
- This dataset was exported via roboflow.com on January 16, 2023 at 9:06 PM GMT
-
- Roboflow is an end-to-end computer vision platform that helps you
- * collaborate with your team on computer vision projects
- * collect & organize images
- * understand and search unstructured image data
- * annotate, and create datasets
- * export, train, and deploy computer vision models
- * use active learning to improve your dataset over time
-
- For state of the art Computer Vision training notebooks you can use with this dataset,
- visit https://github.com/roboflow/notebooks
-
- To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
-
- The dataset includes 9665 images.
- Buildings are annotated in COCO format.
-
- The following pre-processing was applied to each image:
- * Auto-orientation of pixel data (with EXIF-orientation stripping)
-
- No image augmentation techniques were applied.
-
-
-
README.roboflow.txt DELETED
@@ -1,28 +0,0 @@
-
- Buildings Instance Segmentation - v1 raw-images
- ==============================
-
- This dataset was exported via roboflow.com on January 16, 2023 at 9:06 PM GMT
-
- Roboflow is an end-to-end computer vision platform that helps you
- * collaborate with your team on computer vision projects
- * collect & organize images
- * understand and search unstructured image data
- * annotate, and create datasets
- * export, train, and deploy computer vision models
- * use active learning to improve your dataset over time
-
- For state of the art Computer Vision training notebooks you can use with this dataset,
- visit https://github.com/roboflow/notebooks
-
- To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
-
- The dataset includes 9665 images.
- Buildings are annotated in COCO format.
-
- The following pre-processing was applied to each image:
- * Auto-orientation of pixel data (with EXIF-orientation stripping)
-
- No image augmentation techniques were applied.
-
-
data/valid.zip → full/satellite-building-segmentation-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dbbc6828ef700233c4013a73a3d8aa949ce1cfd3a7734fd0834d351a8a46d398
- size 98753011
+ oid sha256:cb7ed4edc3e5f485bbcfd05c3aed774ad95ce6e6b43d2edd08c61399ea7778fd
+ size 50015932
data/train.zip → full/satellite-building-segmentation-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c285d673a81534507d6261230ac004784ba6dbc0ab5904f8d957cc385c4f4db4
- size 345338954
+ oid sha256:efccd83471ba30a9d086f29d28b251d02b5e104fa796edfa959309d756158531
+ size 346461541
data/test.zip → full/satellite-building-segmentation-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4e6f6cca4c6fa01db541330f196bbe32c8ca1e8fe25f9acf4b3d55430f2866d5
- size 49805610
+ oid sha256:6597c3232c62b13a58f63735dc010c4dc30e2605b2e25a6986a44046c99f8083
+ size 99159449
thumbnail.jpg → mini/satellite-building-segmentation-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:83975d9000394e18269e7c8849c5ebfcc0c80e91a5f7918f2371909adf4bc042
- size 136686
+ oid sha256:37b33895b1bb673846b574dc56b3833581697c81bd3146e8f14441bf5cb7a7d8
+ size 137157
data/valid-mini.zip → mini/satellite-building-segmentation-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4997e150e79a7a7ed4bfe1863f2b6ea8c8bc8a633a4bce1b618ec07138fe2688
- size 130275
+ oid sha256:37b33895b1bb673846b574dc56b3833581697c81bd3146e8f14441bf5cb7a7d8
+ size 137157
mini/satellite-building-segmentation-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37b33895b1bb673846b574dc56b3833581697c81bd3146e8f14441bf5cb7a7d8
+ size 137157
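Each entry above is a Git LFS pointer rather than the parquet file itself: a three-line stub recording the pointer spec version, the SHA-256 digest of the real blob (`oid`), and its byte `size`. A minimal sketch of how such a pointer is derived from a local file; `make_pointer` is a hypothetical helper named here only to illustrate the format:

```python
import hashlib
from pathlib import Path


def make_pointer(path: str) -> str:
    """Build a Git LFS pointer (spec v1) for a local file."""
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()  # the value after "oid sha256:"
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{digest}\n"
        f"size {len(data)}\n"  # byte count, e.g. 137157 for the mini files above
    )
```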
satellite-building-segmentation.py DELETED
@@ -1,154 +0,0 @@
- import collections
- import json
- import os
-
- import datasets
-
-
- _HOMEPAGE = "https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation/dataset/1"
- _LICENSE = "CC BY 4.0"
- _CITATION = """\
- @misc{ buildings-instance-segmentation_dataset,
-     title = { Buildings Instance Segmentation Dataset },
-     type = { Open Source Dataset },
-     author = { Roboflow Universe Projects },
-     howpublished = { \\url{ https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation } },
-     url = { https://universe.roboflow.com/roboflow-universe-projects/buildings-instance-segmentation },
-     journal = { Roboflow Universe },
-     publisher = { Roboflow },
-     year = { 2023 },
-     month = { jan },
-     note = { visited on 2023-01-16 },
- }
- """
- _CATEGORIES = ['building']
- _ANNOTATION_FILENAME = "_annotations.coco.json"
-
-
- class SATELLITEBUILDINGSEGMENTATIONConfig(datasets.BuilderConfig):
-     """Builder Config for satellite-building-segmentation"""
-
-     def __init__(self, data_urls, **kwargs):
-         """
-         BuilderConfig for satellite-building-segmentation.
-
-         Args:
-             data_urls: `dict`, name to url to download the zip file from.
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(SATELLITEBUILDINGSEGMENTATIONConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
-         self.data_urls = data_urls
-
-
- class SATELLITEBUILDINGSEGMENTATION(datasets.GeneratorBasedBuilder):
-     """satellite-building-segmentation instance segmentation dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-     BUILDER_CONFIGS = [
-         SATELLITEBUILDINGSEGMENTATIONConfig(
-             name="full",
-             description="Full version of satellite-building-segmentation dataset.",
-             data_urls={
-                 "train": "https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/data/train.zip",
-                 "validation": "https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/data/valid.zip",
-                 "test": "https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/data/test.zip",
-             },
-         ),
-         SATELLITEBUILDINGSEGMENTATIONConfig(
-             name="mini",
-             description="Mini version of satellite-building-segmentation dataset.",
-             data_urls={
-                 "train": "https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/data/valid-mini.zip",
-                 "validation": "https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/data/valid-mini.zip",
-                 "test": "https://huggingface.co/datasets/keremberke/satellite-building-segmentation/resolve/main/data/valid-mini.zip",
-             },
-         )
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "image_id": datasets.Value("int64"),
-                 "image": datasets.Image(),
-                 "width": datasets.Value("int32"),
-                 "height": datasets.Value("int32"),
-                 "objects": datasets.Sequence(
-                     {
-                         "id": datasets.Value("int64"),
-                         "area": datasets.Value("int64"),
-                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
-                         "segmentation": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
-                         "category": datasets.ClassLabel(names=_CATEGORIES),
-                     }
-                 ),
-             }
-         )
-         return datasets.DatasetInfo(
-             features=features,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-             license=_LICENSE,
-         )
-
-     def _split_generators(self, dl_manager):
-         data_files = dl_manager.download_and_extract(self.config.data_urls)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "folder_dir": data_files["train"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "folder_dir": data_files["validation"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "folder_dir": data_files["test"],
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, folder_dir):
-         def process_annot(annot, category_id_to_category):
-             return {
-                 "id": annot["id"],
-                 "area": annot["area"],
-                 "bbox": annot["bbox"],
-                 "segmentation": annot["segmentation"],
-                 "category": category_id_to_category[annot["category_id"]],
-             }
-
-         image_id_to_image = {}
-         idx = 0
-
-         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
-         with open(annotation_filepath, "r") as f:
-             annotations = json.load(f)
-         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
-         image_id_to_annotations = collections.defaultdict(list)
-         for annot in annotations["annotations"]:
-             image_id_to_annotations[annot["image_id"]].append(annot)
-         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
-
-         for filename in os.listdir(folder_dir):
-             filepath = os.path.join(folder_dir, filename)
-             if filename in filename_to_image:
-                 image = filename_to_image[filename]
-                 objects = [
-                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
-                 ]
-                 with open(filepath, "rb") as f:
-                     image_bytes = f.read()
-                 yield idx, {
-                     "image_id": image["id"],
-                     "image": {"path": filepath, "bytes": image_bytes},
-                     "width": image["width"],
-                     "height": image["height"],
-                     "objects": objects,
-                 }
-                 idx += 1
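With the loading script deleted, the COCO parsing it performed happens once at conversion time and is baked into the parquet files, so each split can also be read with any parquet reader. A rough sketch, assuming a local clone with the LFS blobs pulled:

```python
import pandas as pd

# Assumption: run from a local clone where `git lfs pull` has replaced the
# pointer stubs above with the actual parquet blobs.
df = pd.read_parquet("full/satellite-building-segmentation-train.parquet")
print(len(df))           # one row per image; the README reported 6764 train images
print(list(df.columns))  # expected: image_id, image, width, height, objects
```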
split_name_to_num_samples.json DELETED
@@ -1 +0,0 @@
- {"train": 6764, "valid": 1934, "test": 967}
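The deleted counts file is not a loss: per-split row counts can be recovered straight from the parquet footers without reading full row data. A sketch using pyarrow, with paths assuming the same local clone as above:

```python
import pyarrow.parquet as pq

# File names match the full/ parquet files introduced by this commit.
for split in ("train", "validation", "test"):
    path = f"full/satellite-building-segmentation-{split}.parquet"
    print(split, pq.ParquetFile(path).metadata.num_rows)
# Expected counts per the deleted JSON: 6764 / 1934 / 967.
```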