Paolo-Fraccaro committed
Commit: b82b44c
1 Parent(s): 9006c68
correct desc
app.py CHANGED
@@ -397,7 +397,7 @@ with gr.Blocks() as demo:
 gr.Markdown(value='# Prithvi image reconstruction demo')
 gr.Markdown(value='''Prithvi is a first-of-its-kind temporal Vision transformer pretrained by the IBM and NASA team on continental US Harmonised Landsat Sentinel 2 (HLS) data. Particularly, the model adopts a self-supervised encoder developed with a ViT architecture and Masked AutoEncoder learning strategy, with a MSE as a loss function. The model includes spatial attention across multiple patchies and also temporal attention for each patch. More info about the model and its weights are available [here](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M).\n
 This demo showcases the image reconstracting over three timestamps, with the user providing a set of three HLS images and the model randomly masking out some proportion of the images and then reconstructing them based on the not masked portion of the images.\n
-The user needs to provide three HLS geotiff images, including the following channels in reflectance units: Blue, Green, Red,
+The user needs to provide three HLS geotiff images, including the following channels in reflectance units: Blue, Green, Red, Narrow NIR, SWIR, SWIR 2.
 ''')
 with gr.Row():
 with gr.Column():
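For context, the corrected description asks the user for three HLS geotiffs, each with six reflectance bands (Blue, Green, Red, Narrow NIR, SWIR, SWIR 2). Below is a minimal sketch of how such inputs could be stacked into a (time, band, H, W) array for the reconstruction demo, assuming rasterio for reading the geotiffs; the helper function and file names are illustrative and are not taken from app.py.

```python
# Hypothetical sketch (not from app.py): stack three 6-band HLS geotiffs
# (Blue, Green, Red, Narrow NIR, SWIR, SWIR 2) into one array of shape
# (timesteps, bands, H, W), the kind of input the demo description expects.
import numpy as np
import rasterio

def load_hls_timeseries(paths):
    """Read each geotiff as a (6, H, W) reflectance array and stack along time."""
    frames = []
    for path in paths:
        with rasterio.open(path) as src:
            frames.append(src.read().astype(np.float32))  # (bands, H, W)
    return np.stack(frames, axis=0)  # (timesteps, bands, H, W)

# Illustrative file names; replace with the user's three HLS tiles.
ts = load_hls_timeseries(["hls_t1.tif", "hls_t2.tif", "hls_t3.tif"])
print(ts.shape)  # e.g. (3, 6, 224, 224) for 224x224 tiles
```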