csaybar rwilliams committed on
Commit
0c6c7fe
1 Parent(s): 2c7ffd4

Improving Dataset Handling for Sentinel-1 and Sentinel-2 Images (#1)


- Improving Dataset Handling for Sentinel-1 and Sentinel-2 Images (e9615edd5bd54edb0850902cdd4ef517f6969d7b)


Co-authored-by: Raiden Williams <rwilliams@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +38 -8
README.md CHANGED
@@ -97,22 +97,53 @@ Ready to start using **[CloudSEN12](https://cloudsen12.github.io/)**?
 
 <br>
 
-### **np.memmap shape information**
+<br>
+
+# **Dataset information: working with np.memmap**
+
+Sentinel-1 and Sentinel-2 collect images that span an area of 5090 x 5090 meters at 10 meters per pixel.
+This results in 509 x 509 pixel images, a size that presents a challenge for most models.
+
+**Since each layer is a two-dimensional matrix, the true image data is held from pixel (1,1) to (509,509).**
+
+The images have therefore been padded with three pixels along each dimension (one before, two after, as sketched below) to make them 512 x 512, a size that most models accept.
+
+To give a visual representation of where the padding has been added:
+x marks blank pixels, stored as black (255)
+
+xxxxxxxxxxxxxx
+x           xx
+x           xx
+x           xx
+x           xx
+x           xx
+xxxxxxxxxxxxxx
+xxxxxxxxxxxxxx
+
+The effects of the padding can be mitigated by taking a random crop within (1,1) to (509,509),
+or by completing a center crop to the size required by the network architecture.
+
+### The image data is currently split into three categories:
+
+- Training: 84.90 % of the total
+- Validation: 5.35 % of the total
+- Testing: 9.75 % of the total
+
+To recompose the data and draw random samples from all 10,000 available images,
+the np.memmap objects can be combined and a new random selection drawn at the beginning of each trial,
+sized according to the desired percentage of the total data available.
+
+This approach mitigates any training bias introduced by the original assignment of images to each split.
 
 <br>
 
+### **Example**
 **train shape: (8490, 512, 512)**
 <br>
 **val shape: (535, 512, 512)**
 <br>
 **test shape: (975, 512, 512)**
-
 <br>
-
-### **Example**
-
-<br>
-
 ```py
 import numpy as np
 
@@ -135,7 +166,6 @@ y = np.memmap('test/manual_hq.dat', dtype='int8', mode='r', shape=test_shape)
 <br>
 
 
-
 This work has been partially supported by the Spanish Ministry of Science and Innovation project
 PID2019-109026RB-I00 (MINECO-ERDF) and the Austrian Space Applications Programme within the
 **[SemantiX project](https://austria-in-space.at/en/projects/2019/semantix.php)**.
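
To sanity-check the documented shapes against the files, the `np.memmap` call from the README's example can be repeated for every split. This is only a minimal sketch: the diff shows the path and dtype for the test split (`test/manual_hq.dat`, `int8`), so the `train/` and `val/` paths and their dtype below are assumptions following the same pattern.

```py
import numpy as np

# Shapes documented in the README section above.
train_shape = (8490, 512, 512)
val_shape = (535, 512, 512)
test_shape = (975, 512, 512)

# Only the test path/dtype appears in the README example; the train/val
# paths assume the same '<split>/manual_hq.dat' layout.
train = np.memmap('train/manual_hq.dat', dtype='int8', mode='r', shape=train_shape)
val = np.memmap('val/manual_hq.dat', dtype='int8', mode='r', shape=val_shape)
test = np.memmap('test/manual_hq.dat', dtype='int8', mode='r', shape=test_shape)

# np.memmap opens the files lazily; slices are read from disk on access.
print(train.shape, val.shape, test.shape)
# (8490, 512, 512) (535, 512, 512) (975, 512, 512)
```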
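The padding note says the real data sits between pixels (1,1) and (509,509) and suggests either a random crop inside that window or a center crop. A minimal sketch of both options, assuming 0-based array indexing (rows and columns 1 through 509 hold real data) and an arbitrary 256 x 256 patch size chosen only for illustration:

```py
import numpy as np

VALID = slice(1, 510)  # rows/cols 1..509 hold real data; everything else is padding

def center_crop(layer, size):
    """Center crop to size x size, taken entirely inside the valid window."""
    valid = layer[VALID, VALID]                  # 509 x 509 region of real pixels
    off_r = (valid.shape[0] - size) // 2
    off_c = (valid.shape[1] - size) // 2
    return valid[off_r:off_r + size, off_c:off_c + size]

def random_crop(layer, size, rng):
    """Take a random size x size crop that stays inside the valid window."""
    valid = layer[VALID, VALID]
    r = rng.integers(0, valid.shape[0] - size + 1)
    c = rng.integers(0, valid.shape[1] - size + 1)
    return valid[r:r + size, c:c + size]

rng = np.random.default_rng(0)
layer = np.zeros((512, 512), dtype='int8')       # stand-in for one memmap slice
print(center_crop(layer, 256).shape)             # (256, 256)
print(random_crop(layer, 256, rng).shape)        # (256, 256)
```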
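The recomposition paragraph proposes pooling all 10,000 images and drawing a fresh random split at the start of each trial. One possible sketch of that idea, under the same path assumptions as above and keeping the 84.90 / 5.35 / 9.75 % proportions:

```py
import numpy as np

# Re-open the three memmaps (train/val paths are assumptions, as noted above).
train = np.memmap('train/manual_hq.dat', dtype='int8', mode='r', shape=(8490, 512, 512))
val = np.memmap('val/manual_hq.dat', dtype='int8', mode='r', shape=(535, 512, 512))
test = np.memmap('test/manual_hq.dat', dtype='int8', mode='r', shape=(975, 512, 512))

pools = [train, val, test]
sizes = np.array([p.shape[0] for p in pools])        # 8490 + 535 + 975 = 10,000
offsets = np.concatenate([[0], np.cumsum(sizes)])
total = int(sizes.sum())

def fresh_split(rng, fractions=(0.8490, 0.0535, 0.0975)):
    """Shuffle all 10,000 global indices and cut a new train/val/test split."""
    order = rng.permutation(total)
    n_train = int(round(fractions[0] * total))
    n_val = int(round(fractions[1] * total))
    return order[:n_train], order[n_train:n_train + n_val], order[n_train + n_val:]

def fetch(global_idx):
    """Read one image by global index, reaching into the memmap it lives in."""
    pool = int(np.searchsorted(offsets, global_idx, side='right')) - 1
    return np.array(pools[pool][global_idx - offsets[pool]])   # copy the slice into RAM

rng = np.random.default_rng(42)          # reseed per trial for a new random split
tr_idx, va_idx, te_idx = fresh_split(rng)
sample = fetch(int(tr_idx[0]))
print(len(tr_idx), len(va_idx), len(te_idx), sample.shape)     # 8490 535 975 (512, 512)
```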