---
license: apache-2.0
language:
  - en
tags:
  - Pytorch
  - mmsegmentation
  - segmentation
  - burn scars
  - Geospatial
  - Foundation model
datasets:
  - ibm-nasa-geospatial/hls_burn_scars
metrics:
  - accuracy
  - IoU
  - F1 Score
---

## Model and Inputs

The pretrained Prithvi-100M (100 million parameter) model is fine-tuned for the burn scar segmentation task on Harmonized Landsat and Sentinel-2 (HLS) data. Fine-tuning expects an input tile of 512x512x6, where 512 is the height and width in pixels and 6 is the number of bands (a sketch of loading such a tile follows the band list). The bands are:

1. Blue
2. Green
3. Red
4. Narrow NIR
5. SWIR 1
6. SWIR 2
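
As a reference for getting data into this shape, here is a minimal loading sketch. It assumes `rasterio` is available in the environment and uses a hypothetical file path; actual filenames depend on the dataset layout.

```python
import rasterio
import torch

# Hypothetical path to one HLS tile; real filenames depend on the dataset layout.
tile_path = "data/burn_scars/validation/example_tile.tif"

with rasterio.open(tile_path) as src:
    tile = src.read()  # numpy array of shape (bands, height, width) = (6, 512, 512)

# PyTorch models typically expect a (batch, bands, height, width) layout.
batch = torch.from_numpy(tile).float().unsqueeze(0)  # (1, 6, 512, 512)
```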

## Code

Code for fine-tuning is available through GitHub.

The configuration used for fine-tuning is available through the config file.
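
If you want to inspect that configuration programmatically, mmcv's `Config` utility can load it. A minimal sketch, assuming the config file sits in your working directory (it is the same file passed to the inference script below):

```python
from mmcv import Config

# Load and print the resolved fine-tuning configuration.
cfg = Config.fromfile("burn_scars_Prithvi_100M.py")
print(cfg.pretty_text)
```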

To run inference, first install the dependencies:

```bash
mamba create -n prithvi-burn-scar python=3.10 pycocotools ncurses
mamba activate prithvi-burn-scar
pip install --upgrade pip && \
    pip install -r requirements.txt && \
    mim install mmcv-full==1.5.0
```
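
To confirm the environment resolved correctly, a quick import check can help. This sketch assumes `requirements.txt` installs mmsegmentation (as the model tags suggest); adjust if your environment differs.

```python
import mmcv
import mmseg  # provided by the mmsegmentation package

print(mmcv.__version__)   # expected: 1.5.0, per the mim install above
print(mmseg.__version__)
```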

Instructions for downloading the dataset from Hugging Face:

1. Create an account at https://huggingface.co/join.

2. Install git following https://git-scm.com/downloads.

3. Install git-lfs with `sudo apt install git-lfs`, then run `git lfs install`.

4. Run the following commands to download the HLS dataset. You may need to enter your Hugging Face username and password for the `git clone`.

    ```bash
    mkdir -p data
    cd data/
    git clone https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars burn_scars
    tar -xzvf burn_scars/hls_burn_scars.tar.gz -C ./
    ```

With the dataset and environment in place, you can now run the inference script:

```bash
python burn_scar_batch_inference_script.py \
    -config burn_scars_Prithvi_100M.py \
    -ckpt burn_scars_Prithvi_100M.pth \
    -input data/burn_scars/validation \
    -output data/burn_scars/inference_output \
    -input_type tif
```
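
To eyeball a result, you can load one of the output rasters. A minimal sketch, assuming the script writes single-band GeoTIFF masks; the output filename here is hypothetical.

```python
import rasterio
import matplotlib.pyplot as plt

# Hypothetical output filename; actual names depend on the inference script.
pred_path = "data/burn_scars/inference_output/example_tile_pred.tif"

with rasterio.open(pred_path) as src:
    mask = src.read(1)  # first band of the predicted mask

plt.imshow(mask, cmap="gray")
plt.title("Predicted burn scar mask")
plt.show()
```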

## Results