Details on the Architecture

#6 by AnuIdame - opened

As I see it, the datasets used to train the models, such as hls_burn_scars and crop classification, are different. So are you using different networks/models to train them, or the same model for both?

IBM NASA Geospatial org • edited Sep 27, 2023

Hi @AnuIdame. The initial backbone for all of these models is the same Vision Transformer encoder, pretrained on a large unlabeled dataset as a masked autoencoder (MAE).

We take the encoder from this model and attach a new decoder to it, which is finetuned for each specific task. Depending on the type of task, different decoder architectures may perform better.

During this finetuning process, the weights of the backbone can be kept fixed (using the frozen_backbone=True option) or finetuned along with the decoder.
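The pattern described above can be sketched in plain PyTorch. This is only an illustration, not the actual Prithvi/MMSegmentation code: the module classes and sizes here are hypothetical stand-ins for the pretrained ViT encoder and a task-specific decoder head.

```python
# Illustrative sketch: one shared pretrained encoder, a per-task decoder,
# and an option to freeze the encoder weights during finetuning.
# All class names and dimensions are hypothetical.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Stand-in for the pretrained Vision Transformer encoder."""

    def __init__(self, in_dim=8, dim=16):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)

    def forward(self, x):
        return self.proj(x)


class SegDecoder(nn.Module):
    """Stand-in for a task-specific decoder head."""

    def __init__(self, dim=16, num_classes=2):
        super().__init__()
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats):
        return self.head(feats)


def build_finetune_model(frozen_backbone=True):
    encoder = TinyEncoder()  # in practice, loaded from pretrained MAE weights
    decoder = SegDecoder()   # freshly initialized for each downstream task
    if frozen_backbone:
        for p in encoder.parameters():
            p.requires_grad = False  # only the decoder gets gradient updates
    return nn.Sequential(encoder, decoder)


model = build_finetune_model(frozen_backbone=True)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
# With the backbone frozen, only the decoder (index "1" in the Sequential)
# contributes trainable parameters.
```

With frozen_backbone=True, an optimizer built from `model.parameters()` still works, but gradients simply never flow into the encoder weights; setting it to False finetunes encoder and decoder jointly.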

All of this complexity is abstracted away through the config files used by the MMSegmentation framework.
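MMSegmentation configs are Python files defining nested dicts. As a hedged sketch of how the pieces above might be wired together in such a config: the field names below (other than frozen_backbone, which the reply mentions) and the checkpoint path are illustrative, not the exact schema of the released configs.

```python
# Hypothetical MMSegmentation-style config fragment.
# Field values and the checkpoint path are illustrative assumptions.
model = dict(
    type="EncoderDecoder",
    backbone=dict(
        type="VisionTransformer",      # shared MAE-pretrained encoder
        pretrained="Prithvi_100M.pt",  # hypothetical checkpoint path
        frozen_backbone=True,          # keep encoder weights fixed
    ),
    decode_head=dict(
        type="FCNHead",                # task-specific decoder, swapped per task
        num_classes=2,                 # e.g. burn scar vs. background
    ),
)
```

Swapping tasks then amounts to editing the decode_head (and dataset) sections while reusing the same backbone block.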
