hanxLi committed on
Commit 4207411
1 Parent(s): 52ec00b

Initial creation of README

Files changed (1): README.md (+43 -0)
---
license: apache-2.0
language:
- en
tags:
- Pytorch
- mmsegmentation
- segmentation
- Crop Classification
- Multi Temporal
- Geospatial
- Foundation model
datasets:
- ibm-nasa-geospatial/hls_burn_scars
metrics:
- accuracy
- IoU
---
### Model and Inputs
The pretrained [Prithvi-100m](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M/blob/main/README.md) model is finetuned to classify crop and other land cover types based on HLS data and CDL labels from the [HLS Burn Scar Scenes dataset](https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars).
This dataset provides input chips of 224x224x18, where 224 is the height and width and 18 is the result of stacking 6 bands across 3 time steps.
The bands are:

1. Blue
2. Green
3. Red
4. Narrow NIR
5. SWIR 1
6. SWIR 2

While Prithvi-100m was pretrained with 3 timesteps, this task takes advantage of the multi-temporal input capability inherited from the pretrained foundation model, which provides more generalized results.
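
To make the chip layout concrete, here is a minimal sketch (assuming PyTorch tensors; the variable names are illustrative, not taken from the training code) of how 6 bands over 3 time steps become an 18-channel input:

```python
import torch

# Illustrative sketch, not the finetuning code: 6 spectral bands per
# time step, 3 time steps, stacked along the channel dimension.
bands = ["Blue", "Green", "Red", "Narrow NIR", "SWIR 1", "SWIR 2"]
num_bands, num_timesteps, chip_size = len(bands), 3, 224

# One (6, 224, 224) chip per time step, e.g. from three HLS acquisition dates.
chips = [torch.rand(num_bands, chip_size, chip_size) for _ in range(num_timesteps)]

# Stacked model input: (18, 224, 224), ordered t0 bands, then t1, then t2.
x = torch.cat(chips, dim=0)
assert x.shape == (num_bands * num_timesteps, chip_size, chip_size)
```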

### Code
Code for finetuning is available on [GitHub](https://github.com/NASA-IMPACT/hls-foundation-os/tree/main/fine-tuning-examples).

The configuration used for finetuning is available through [this config](https://github.com/NASA-IMPACT/hls-foundation-os/blob/main/fine-tuning-examples/configs/firescars_config.py).
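
For reference, a run can be launched through the mmsegmentation 0.x Python API; this is a hedged sketch under the assumption that the config path above sits in your checkout and that your mmsegmentation install matches the repo's pinned version (the repo's own train tooling is the canonical route):

```python
# Minimal sketch of a finetuning launch with the mmsegmentation 0.x API.
from mmcv import Config
from mmseg.apis import train_segmentor
from mmseg.datasets import build_dataset
from mmseg.models import build_segmentor

cfg = Config.fromfile("configs/firescars_config.py")  # config linked above
cfg.work_dir = "./work_dirs/firescars"  # hypothetical output dir; the config may already set one

model = build_segmentor(cfg.model)
datasets = [build_dataset(cfg.data.train)]
train_segmentor(model, datasets, cfg, distributed=False, validate=True)
```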

### Results
Running the mmsegmentation stack for 80 epochs with the above config led to an IoU of **0.72** on the burn scar class and an overall accuracy of **0.96**. This is a reasonably good model, but further development will most likely improve performance.

### Inference and demo
There is an inference script that allows running the hls-cdl crop classification model on HLS images. The inputs have to be in GeoTIFF format with 18 bands covering 3 time steps, where each time step contains the channels described above (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2) in that order. There is also a **demo** that leverages the same code **[here](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-100M-Burn-scars-demo)**.
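
As a hedged illustration of that input contract (rasterio and the file name are assumptions here, not part of the released inference script), loading one GeoTIFF into a model-ready tensor might look like:

```python
import numpy as np
import rasterio
import torch

# Read an 18-band GeoTIFF: 3 time steps x 6 bands, in the order listed above.
with rasterio.open("example_hls_chip.tif") as src:  # hypothetical file name
    data = src.read()  # ndarray of shape (18, H, W)

assert data.shape[0] == 18, "expected 6 bands x 3 time steps"
x = torch.from_numpy(data.astype(np.float32)).unsqueeze(0)  # (1, 18, H, W)
```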