hanxLi committed on
Commit 7b7de30
1 Parent(s): 5769604

Update README to reflect correct links; added finetune results to README

Files changed (1):
1. README.md +33 -7
README.md CHANGED
17
@@ -17,9 +17,9 @@ metrics:
- IoU
---
### Model and Inputs
- The pretrained [Prithvi-100m](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M/blob/main/README.md) parameter model is finetuned to classify crop and other land cover types based off HLS data and CDL labels from the [HLS Burn Scar Scenes dataset](https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars).
- This dataset includes input chips of 224x224x18, where 224 is the height and width and 18 is combined with 6 bands of 3 time-steps.
- The bands are:
+ The pretrained [Prithvi-100m](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M/blob/main/README.md) parameter model is finetuned to classify crops and other land-cover types based on HLS data and CDL labels from the [multi_temporal_crop_classification dataset](https://huggingface.co/datasets/ibm-nasa-geospatial/multi-temporal-crop-classification).
+
+ This dataset consists of input chips of 224x224x18, where 224 is the height and width and 18 comprises the 6 spectral bands stacked across 3 time-steps. The bands are:

1. Blue
2. Green
@@ -28,16 +28,42 @@ The bands are:
5. SWIR 1
6. SWIR 2

- While the Prithvi-100m was pretrained with 3 timesteps, this task utilize the capibility of multi-temporal data input adapted from the pretrained foundation model and provide more generalized and
+ Labels are from the CDL (Crop Data Layer) and are classified into 13 classes.
+
+ ![](multi_temporal_crop_classification.png)
+
+ The Prithvi-100m model was pretrained with a sequence length of 3 timesteps. This task leverages that multi-temporal input capability, carried over from the pretrained foundation model, to achieve more generalized finetuning outcomes.
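To make the chip layout concrete, here is a minimal sketch (an editor's illustration, not code from the repository) of how the 224x224x18 input described above decomposes into 6 bands over 3 time-steps; the time-step-major band ordering is an assumption:

```python
import numpy as np

# Three hypothetical 6-band acquisitions (Blue, Green, Red, Narrow NIR,
# SWIR 1, SWIR 2) of the same 224x224 area at three different dates.
t1, t2, t3 = (np.random.rand(224, 224, 6).astype(np.float32) for _ in range(3))

# Stacking along the channel axis yields the 224x224x18 chip described
# above: 18 channels = 6 spectral bands x 3 time-steps.
chip = np.concatenate([t1, t2, t3], axis=-1)
print(chip.shape)  # (224, 224, 18)
```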

### Code
Code for finetuning is available through [github](https://github.com/NASA-IMPACT/hls-foundation-os/tree/main/fine-tuning-examples).

- Configuration used for finetuning is available through [config](https://github.com/NASA-IMPACT/hls-foundation-os/blob/main/fine-tuning-examples/configs/firescars_config.py
- ).
+ Configuration used for finetuning is available through [config](https://github.com/NASA-IMPACT/hls-foundation-os/blob/main/fine-tuning-examples/configs/multi_temporal_crop_classification.py).
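Since the README only links the config, a rough sketch of how such a finetune could be launched with the MMSegmentation 0.x Python API (the "mmseg stack" referenced below) may help; the import paths, API version, and config path here are assumptions, and the repository's own training entry point should be preferred:

```python
from mmcv import Config
from mmseg.apis import train_segmentor
from mmseg.datasets import build_dataset
from mmseg.models import build_segmentor

# Assumed local path to the linked finetuning config.
cfg = Config.fromfile("configs/multi_temporal_crop_classification.py")

# Build the segmentor and training dataset the way mmseg's tools/train.py does.
model = build_segmentor(cfg.model,
                        train_cfg=cfg.get("train_cfg"),
                        test_cfg=cfg.get("test_cfg"))
datasets = [build_dataset(cfg.data.train)]

# Single-GPU, non-distributed training with periodic validation.
train_segmentor(model, datasets, cfg, distributed=False, validate=True)
```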
 
### Results
- The experiment by running the mmseg stack for 80 epochs using the above config led to an IoU of **0.72** on the burn scar class and **0.96** overall accuracy. It is noteworthy that this leads to a resonably good model, but further developement will most likely improve performance.
+ Running the mmseg stack for 80 epochs with the above config led to the following results:
+
+ | **Classes**          | **IoU** | **Acc** |
+ |:--------------------:|:-------:|:-------:|
+ | Natural Vegetation   | 0.3362  | 39.06%  |
+ | Forest               | 0.4362  | 65.88%  |
+ | Corn                 | 0.4574  | 54.53%  |
+ | Soybeans             | 0.4682  | 62.25%  |
+ | Wetlands             | 0.3246  | 45.62%  |
+ | Developed/Barren     | 0.3077  | 49.10%  |
+ | Open Water           | 0.6181  | 90.04%  |
+ | Winter Wheat         | 0.4497  | 66.75%  |
+ | Alfalfa              | 0.2518  | 63.97%  |
+ | Fallow/Idle Cropland | 0.3280  | 54.46%  |
+ | Cotton               | 0.2679  | 66.37%  |
+ | Sorghum              | 0.2741  | 75.91%  |
+ | Other                | 0.2803  | 39.76%  |
+
+ | **aAcc** | **mIoU** | **mAcc** |
+ |:--------:|:--------:|:--------:|
+ | 54.32%   | 0.3692   | 59.51%   |
+
+ It is important to acknowledge that the CDL (Crop Data Layer) labels used here are known to contain noise and are not entirely precise, which limits the model's performance. Finetuning with more accurate labels is expected to further improve results.
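As a quick sanity check on the summary table (an editor's illustration; all values copied from the per-class table above), mIoU and mAcc are the unweighted means of the per-class scores:

```python
# (IoU, Acc%) per class, copied from the table above.
per_class = {
    "Natural Vegetation": (0.3362, 39.06), "Forest": (0.4362, 65.88),
    "Corn": (0.4574, 54.53), "Soybeans": (0.4682, 62.25),
    "Wetlands": (0.3246, 45.62), "Developed/Barren": (0.3077, 49.10),
    "Open Water": (0.6181, 90.04), "Winter Wheat": (0.4497, 66.75),
    "Alfalfa": (0.2518, 63.97), "Fallow/Idle Cropland": (0.3280, 54.46),
    "Cotton": (0.2679, 66.37), "Sorghum": (0.2741, 75.91),
    "Other": (0.2803, 39.76),
}

ious = [iou for iou, _ in per_class.values()]
accs = [acc for _, acc in per_class.values()]

print(f"mIoU = {sum(ious) / len(ious):.4f}")  # 0.3692, as reported
print(f"mAcc = {sum(accs) / len(accs):.2f}%")  # ~59.52%; the reported 59.51%
# differs only by rounding of the per-class values. aAcc (54.32%) is
# pixel-weighted and cannot be recovered from per-class means alone.
```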
 
### Inference and demo
There is an inference script that runs the hls-cdl crop classification model on HLS images. Inputs must be in GeoTIFF format, containing 18 bands for 3 time-steps, where each time-step includes the bands described above (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2) in order. There is also a **demo** that leverages the same code **[here](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-100M-Burn-scars-demo)**.
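To illustrate the expected input format, a minimal hypothetical loading sketch with rasterio follows; the file name is a placeholder, the time-step-major band ordering follows the description above, and the actual inference script may apply additional preprocessing such as normalization:

```python
import numpy as np
import rasterio  # assumed dependency for reading GeoTIFFs

# Placeholder path to an HLS GeoTIFF chip with 18 bands.
with rasterio.open("hls_chip.tif") as src:
    assert src.count == 18, "expected 6 bands x 3 time-steps"
    data = src.read()  # (18, height, width), band-first

# Split into (time-steps, bands, height, width), assuming each time-step's
# six bands are stored contiguously in the order Blue, Green, Red,
# Narrow NIR, SWIR 1, SWIR 2.
chips = data.reshape(3, 6, *data.shape[1:]).astype(np.float32)
print(chips.shape)  # e.g. (3, 6, 224, 224)
```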