---
license: apache-2.0
tags:
- Pytorch
- Geospatial
- Temporal ViT
- Vit
---

### Model and Inputs
Prithvi is a first-of-its-kind temporal Vision Transformer pre-trained by the IBM and NASA team on contiguous US Harmonized Landsat Sentinel-2 (HLS) data. Specifically, the model adopts a self-supervised encoder with a ViT architecture and a Masked Autoencoder (MAE) learning strategy, trained with an L1 reconstruction loss. The model applies spatial attention across multiple patches as well as temporal attention for each patch.

![](GFM.png)

The model expects remote sensing data in a video format (B, C, T, H, W). Note that the temporal dimension is central here and absent from most other work on remote sensing modeling. Being able to handle a time series of remote sensing images benefits a variety of downstream tasks. The model can also handle static images, which are simply fed in with T=1.
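
Below is a minimal sketch (not part of the released code) of the expected tensor layout; the shapes and the T=1 handling follow the description above, while the tensor values are random placeholders.

```python
import torch

# Illustrative input layout only: B=batch, C=6 HLS bands, T=3 time steps,
# H=W=224 pixels. The actual Prithvi loading API is not shown here.
batch = torch.randn(1, 6, 3, 224, 224)  # (B, C, T, H, W)

# A single static image is handled by using a time dimension of length 1.
static_image = torch.randn(1, 6, 224, 224)    # (B, C, H, W)
static_batch = static_image.unsqueeze(2)      # (B, C, T=1, H, W)

print(batch.shape, static_batch.shape)
```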

### Pre-training
The model was pre-trained with NASA's HLS2 L30 product (30 m spatial resolution) from the contiguous United States. The following bands were used, in this order (a convenience index mapping is sketched after the list):

1. Blue
2. Green
3. Red
4. Narrow NIR
5. SWIR 1
6. SWIR 2
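
The mapping below is only an illustrative convention (it is not part of the released code): it records the channel index of each band along the C dimension of the input tensor, following the order above.

```python
# Hypothetical band-to-channel-index mapping for the (B, C, T, H, W) input,
# matching the band order listed above.
HLS_BANDS = {
    "Blue": 0,
    "Green": 1,
    "Red": 2,
    "Narrow NIR": 3,
    "SWIR 1": 4,
    "SWIR 2": 5,
}
```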

### Code
The model follows the [original mae repo](https://github.com/facebookresearch/mae) with some modifications, including:

1. replacing the 2D patch embedding with a 3D patch embedding;
2. replacing the 2D positional embedding with a 3D positional embedding;
3. replacing 2D patchify and unpatchify with their 3D counterparts (a sketch is shown after this list);
4. adding infrared bands besides RGB.
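
The sketch below illustrates, under stated assumptions, what a 3D patchify/unpatchify pair looks like: space is split into p×p patches and time into tp-step chunks, so a (B, C, T, H, W) cube becomes a sequence of flattened spatio-temporal patches. Function names, patch sizes, and dimension ordering are assumptions for illustration; the released code may differ.

```python
import torch

def patchify_3d(imgs: torch.Tensor, p: int = 16, tp: int = 1) -> torch.Tensor:
    """Illustrative 3D patchify (assumed layout, not the repository's exact code).

    imgs: (B, C, T, H, W) -> (B, L, tp*p*p*C), with L = (T//tp)*(H//p)*(W//p).
    """
    B, C, T, H, W = imgs.shape
    x = imgs.reshape(B, C, T // tp, tp, H // p, p, W // p, p)
    # Move the patch-grid dims forward and the within-patch dims to the end.
    x = x.permute(0, 2, 4, 6, 3, 5, 7, 1)
    return x.reshape(B, (T // tp) * (H // p) * (W // p), tp * p * p * C)

def unpatchify_3d(x: torch.Tensor, p: int = 16, tp: int = 1,
                  T: int = 3, H: int = 224, W: int = 224, C: int = 6) -> torch.Tensor:
    """Inverse of patchify_3d under the same assumptions."""
    B = x.shape[0]
    x = x.reshape(B, T // tp, H // p, W // p, tp, p, p, C)
    x = x.permute(0, 7, 1, 4, 2, 5, 3, 6)
    return x.reshape(B, C, T, H, W)

# Round-trip check on a dummy tensor.
imgs = torch.randn(1, 6, 3, 224, 224)
assert torch.allclose(unpatchify_3d(patchify_3d(imgs)), imgs)
```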

### Inference and demo
There is an inference script (`Prithvi_run_inference.py`) that runs image reconstruction on a set of three HLS images (see the example command below). These images have to be in GeoTIFF format and include the channels described above (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2) in reflectance units; a sketch of how such inputs map onto the model's input tensor follows the command. There is also a **demo** that leverages the same code [here](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-100M-demo).

```
python Prithvi_run_inference.py --data_files t1.tif t2.tif t3.tif --yaml_file_path /path/to/yaml/Prithvi_100.yaml --checkpoint /path/to/checkpoint/Prithvi_100.pth --output_dir /path/to/out/dir/ --mask_ratio 0.5
```
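
As a rough illustration (not part of the repository), the snippet below shows how three GeoTIFF frames holding the six bands above could be stacked into the (B, C, T, H, W) layout the model expects. The file names are the placeholders from the command above, and `rasterio` is only one of several libraries that can read GeoTIFFs.

```python
import numpy as np
import rasterio
import torch

# Placeholder file names from the example command above.
files = ["t1.tif", "t2.tif", "t3.tif"]

# rasterio returns each file as a (bands, height, width) array.
frames = [rasterio.open(f).read() for f in files]       # each (C, H, W)

stack = np.stack(frames, axis=1)                         # (C, T, H, W)
batch = torch.from_numpy(stack).float().unsqueeze(0)     # (B, C, T, H, W)
print(batch.shape)
```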

### Finetuning examples
Examples of finetuning the model for image segmentation using the mmsegmentation library are available through Hugging Face (e.g. [burn scars detection](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-burn-scar) and [multi-temporal crop classification](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-multi-temporal-crop-classification)), with the code used for the experiments available on [GitHub](https://github.com/NASA-IMPACT/hls-foundation-os/tree/main/fine-tuning-examples). That repository also contains instructions for finetuning the model for flood detection on the popular open-access [sen1floods11 dataset](https://github.com/cloudtostreet/Sen1Floods11).

## Citation

If this model helped your research, please cite `Prithvi-100M` in your publications. Here is an example BibTeX entry:

```
@misc{Prithvi-100M,
    author = {Roy, Sujit and Ankur, Kumar and Phillips, Christopher and Ramasubramanian, Muthukumaran and Gurung, Iksha and Ramachandran, Rahul and Maskey, Manil and Olofossen, Pontus and Fancher, Elizabeth and Lee, Tsengdar and Murphy, Kevin and Duffy, Dan and Little, Mike and Jakubik, Johannes and Chu, Linsong and Fraccaro, Paolo and Das, Ranjini and Das, Kamal and Kimura, Daiki and Simumba, Naomi and Szwarcman, Daniela and Michal, Michal and Weldemariam, Kommy and Zadrozny, Bianca and Ganti, Raghu and Costa, Carlos and Alemohammad, Hamed and Cecil, Michael and Li, Steve and Khallaghi, Sam and Godwin, Denys and Ahmadi, Maryam and Kordi, Fatemeh and Saux, Bertrand and Pastick, Neal and Doucette, Peter and Fleckenstein, Rylie and Luanga, Dalton and Corvin, Alex and Granger, Erwan},
    doi    = {https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M},
    month  = aug,
    title  = {{Prithvi-100M}},
    url    = {https://github.com/nasa-impact/Prithvi-100M},
    year   = {2023}
}
```