---
license: apache-2.0
language:
- en
tags:
- Pytorch
- mmsegmentation
- segmentation
- burn scars
- Geospatial
- Foundation model
datasets:
- ibm-nasa-geospatial/hls_burn_scars
metrics:
- accuracy
- IoU
- F1 Score
---

### Model and Inputs
The pretrained, 100M-parameter [Prithvi-100m](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M/blob/main/README.md) model is fine-tuned to detect burn scars on HLS data from the [HLS Burn Scar Scenes dataset](https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars). This dataset consists of 512x512x6 input tiles, where 512 is the height and width and 6 is the number of bands. The bands are:
 
1. Blue
2. Green
3. Red
4. Narrow NIR
5. SWIR 1
6. SWIR 2

![](burn_scar.png)

It is important to point out that the HLS Burn Scar Scenes dataset includes a single timestep, while Prithvi-100m was pretrained with 3 timesteps. This highlights the flexibility of the model to adapt to different downstream tasks and requirements.
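
To make the input layout concrete, the sketch below loads a 6-band HLS GeoTIFF into the 512x512x6 arrangement described above using rasterio. The file name and the band ordering inside the GeoTIFF are assumptions for illustration; this is not the repository's own loading code.

```python
# Minimal sketch: load a 6-band HLS GeoTIFF into a 512x512x6 array.
# The file name and band ordering (Blue, Green, Red, Narrow NIR,
# SWIR 1, SWIR 2) are assumptions for illustration.
import numpy as np
import rasterio

with rasterio.open("hls_tile.tif") as src:   # hypothetical input file
    bands = src.read()                       # shape: (6, 512, 512)

# Move bands last to match the 512x512x6 tile description.
tile = np.transpose(bands, (1, 2, 0)).astype(np.float32)
assert tile.shape == (512, 512, 6)
```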

### Code
Code for fine-tuning is available on [GitHub](https://github.com/NASA-IMPACT/hls-foundation-os/tree/main/fine-tuning-examples).

The configuration used for fine-tuning is available [here](https://github.com/NASA-IMPACT/hls-foundation-os/blob/main/fine-tuning-examples/configs/firescars_config.py).
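
For orientation, a fine-tuning run with this config can be launched through the standard mmsegmentation (0.x) Python API, as sketched below. This is a generic sketch of that API, assuming an mmsegmentation 0.x environment with the config on disk; it is not the repository's exact entry point.

```python
# Generic mmsegmentation 0.x fine-tuning launch; a sketch, not the
# repo's own entry point. The config path follows the repo layout.
from mmcv import Config
from mmseg.apis import train_segmentor
from mmseg.datasets import build_dataset
from mmseg.models import build_segmentor

cfg = Config.fromfile("configs/firescars_config.py")
datasets = [build_dataset(cfg.data.train)]
model = build_segmentor(cfg.model)
model.CLASSES = datasets[0].CLASSES  # attach class names for logging
train_segmentor(model, datasets, cfg, distributed=False, validate=True)
```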

### Results
Running the mmsegmentation stack for 50 epochs with the above configuration yielded an IoU of **0.72** on the burn scar class and an overall accuracy of **0.96**. This is a reasonably good model, but further development will most likely improve performance.
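
For reference, the sketch below shows how these two numbers are typically computed from predicted and ground-truth segmentation masks. It is a minimal NumPy sketch, with the burn scar class index assumed to be 1.

```python
import numpy as np

def burn_scar_metrics(pred: np.ndarray, label: np.ndarray, cls: int = 1):
    """Per-class IoU plus overall pixel accuracy (class index assumed)."""
    inter = np.logical_and(pred == cls, label == cls).sum()
    union = np.logical_or(pred == cls, label == cls).sum()
    iou = inter / union if union else float("nan")  # IoU for `cls`
    acc = (pred == label).mean()                    # overall accuracy
    return float(iou), float(acc)
```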

### Inference and demo
The GitHub repo includes an inference script for running the burn scar model on HLS images. Inputs must be in GeoTIFF format and contain the channels described above (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2) in reflectance units [0-1]. There is also a **demo** that leverages the same code **[here](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-100M-Burn-scars-demo)**.
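
As a rough sketch of what inference looks like through the generic mmsegmentation (0.x) API: the checkpoint filename below is a hypothetical placeholder, and the config's data pipeline is assumed to handle 6-band GeoTIFF input. The repository's own inference script handles the actual GeoTIFF I/O and normalization.

```python
# Hypothetical inference sketch using the generic mmsegmentation 0.x API.
from mmseg.apis import init_segmentor, inference_segmentor

model = init_segmentor(
    "configs/firescars_config.py",   # fine-tuning config from the repo
    "burn_scars_checkpoint.pth",     # hypothetical checkpoint path
    device="cuda:0",
)
# 6-band GeoTIFF in reflectance units [0-1]; loading is assumed to be
# handled by the config's test pipeline.
result = inference_segmentor(model, "hls_tile.tif")
```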