Adib Hasan committed on
Commit 33c2e4b · 2 Parent(s): 31dc0c9 3c0d1b0

Merge branch 'main' of https://huggingface.co/datasets/notadib/NASA-Power-Daily-Weather

Files changed (1)
  1. README.md +22 -15
README.md CHANGED
@@ -2,7 +2,7 @@
 license: mit
 ---
 
-# Dataset Card for NASA Power Daily Weather Dataset
+# NASA Power Weather Data over North, Central, and South America from 1984 to 2022
 
 <!-- Provide a quick summary of the dataset. -->
 
@@ -36,7 +36,7 @@ Here are the descriptions of the 31 weather variables with their units:
 | Profile Soil Moisture (0 to 1) | GWETPROF | 0 to 1 |
 | Snow Depth | SNODP | cm |
 | Dew/Frost Point at 2 Meters | T2MDEW | C |
-| Cloud Amount | CLOUD_AMT | % |
+| Cloud Amount | CLOUD_AMT | 0 to 1 |
 | Evaporation Land | EVLAND | kg/m^2/s * 10^6 |
 | Wet Bulb Temperature at 2 Meters | T2MWET | C |
 | Land Snowcover Fraction | FRSNO | 0 to 1 |
@@ -46,7 +46,7 @@ Here are the descriptions of the 31 weather variables with their units:
 | Precipitable Water | PW | cm |
 | Surface Roughness | Z0M | m |
 | Surface Air Density | RHOA | kg/m^3 |
-| Relative Humidity at 2 Meters | RH2M | % |
+| Relative Humidity at 2 Meters | RH2M | 0 to 1 |
 | Cooling Degree Days Above 18.3 C | CDD18_3 | days |
 | Heating Degree Days Below 18.3 C | HDD18_3 | days |
 | Total Column Ozone | TO3 | Dobson units |
@@ -58,8 +58,10 @@ Here are the descriptions of the 31 weather variables with their units:
 
 ### Grid coordinates for the regions
 
-the indices in the dataset refer to the order of these coordinates. For instance `usa_0` refers to
-the first rectangle of the USA in the list below.
+The location indices in the dataset refer to the order of these coordinates. For instance, `usa_0` refers to
+the first rectangle of the USA in the list below. For the pytorch data, location indices 0-34 refer to the data from the USA
+grid, 35-110 refer to the data from the South America grid, and the rest refer to the data from the Central America
+grid.
 
 #### USA
 
@@ -196,31 +198,36 @@ the first rectangle of the USA in the list below.
 
 <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
 
-**raw:** unprocessed data dump from NASA pPower API in the JSON format.
+**raw:** unprocessed data dump from the NASA Power API in the JSON format.
 
-**csvs:** Processed data in the csv format.
+**csvs:** Processed data in the CSV format.
 
-**pytorch:** Pytorch TensorDataset objects ready to be used in training. Each sample is a tuple of the following data:
-* weather measurements (shape 365 x 31)
-* coordinates (shape 1 x 2)
-* index (1 x 2). the first number is the index of the current row since Jan 1, 1984. The second number is the temporal granularity, which is
-* 1 for daily data, 7 for weekly data and 30 for monthly data.
+**pytorch:** PyTorch TensorDataset objects ready to be used in training. All of the daily, weekly, and monthly data have been reshaped
+so that the **sequence length is 365**. Each sample is a tuple of the following data:
+* weather measurements (shape `sequence_length x 31`)
+* coordinates (shape `1 x 2`)
+* index (`1 x 2`). The first number is the temporal index of the current row since Jan 1, 1984. The second number is the temporal granularity,
+or the spacing between indices, which is 1 for daily data, 7 for weekly data, and 30 for monthly data. Note: this means each row of the daily data
+contains 1 year of data, each row of the weekly data contains 7 years of data (`7 * 52 = 364`), and each row of the monthly data contains 12 years of
+data (`12 * 30 = 360`).
 
 ## Dataset Creation
 
 ### Source Data
 
 <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-NASA Power API daily weather measurments
+NASA Power API daily weather measurements. The data comes from multiple sources, but mostly satellite data.
 
 #### Data Processing
 
 <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
+The `raw` data is in the JSON format and unprocessed. The `csvs` and the `pytorch` data are processed in the following manner:
+
 - Missing values were backfilled.
-- Leap year extra day was omitted. So, each year of the daily dataset has 365 days. Similarly, each year of weekly dataset has 52 weeks, and the monthly dataset has 12 columns.
+- The extra leap-year day was omitted, so each year of the daily dataset has 365 days. Similarly, each year of the weekly dataset has 52 weeks, and each year of the monthly dataset has 12 columns.
 - Data was pivoted. So each measurement has x columns where x is either 365, 52, or 12.
-- `pytorch` data was standardized using the mean and std of the weather over the continental united states.
+- `pytorch` data was standardized using the mean and std of the weather over the continental United States.
 
 ## Citation [optional]
 
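
As a quick orientation to the `pytorch` split described in the updated README, the sketch below unpacks one sample and maps a location index to its grid. The file name `pytorch/daily.pt` and the helper name are hypothetical; only the tuple layout, the granularity codes (1/7/30), and the 0-34 / 35-110 location ranges come from the README text in this diff.

```python
import torch

# Hypothetical path: the diff does not say how the TensorDataset objects are
# laid out on disk inside the `pytorch` folder.
dataset = torch.load("pytorch/daily.pt", weights_only=False)  # a pickled TensorDataset

weather, coords, index = dataset[0]
# weather: sequence_length x 31 weather variables (sequence_length is 365)
# coords:  1 x 2 latitude/longitude of the grid cell
# index:   1 x 2 -> (temporal index since Jan 1, 1984, temporal granularity)
temporal_index, granularity = index.flatten().tolist()
# granularity is 1 for daily, 7 for weekly, and 30 for monthly rows, so one row
# spans roughly 1, 7, or 12 years of measurements respectively.


def region_for_location(loc: int) -> str:
    """Map a pytorch location index to its grid, per the README:
    0-34 -> USA, 35-110 -> South America, the rest -> Central America."""
    if loc <= 34:
        return "usa"
    if loc <= 110:
        return "south_america"
    return "central_america"
```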
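The Data Processing bullets translate into roughly the following pandas steps. This is a rough sketch, assuming a long-format frame with columns `location`, `date`, `variable`, and `value` (illustrative names, not the dataset's actual schema); the standardization of the `pytorch` split is not shown.

```python
import pandas as pd


def pivot_daily(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the listed steps: backfill, drop the leap day, pivot to 365 columns."""
    df = df.sort_values(["location", "variable", "date"])
    # Backfill missing values within each (location, variable) series
    df["value"] = df.groupby(["location", "variable"])["value"].bfill()
    # Omit the leap-year extra day so every year has exactly 365 days
    df = df[~((df["date"].dt.month == 2) & (df["date"].dt.day == 29))]
    # Pivot so each (location, year, variable) row has 365 day-of-year columns
    df = df.assign(year=df["date"].dt.year)
    df = df.assign(day=df.groupby(["location", "variable", "year"]).cumcount() + 1)
    return df.pivot_table(index=["location", "year", "variable"],
                          columns="day", values="value")
```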