Table 9 ablates reconstructing only the low resolution component, only the high resolution component, or a combined image (rather than independent low/high reconstructions). In this case, when the high resolution component is reconstructed, we do not use the low-resolution residual, but rather reconstruct the high resolution result directly. The "Combined" entry combines the low and high resolution results instead of treating them as separate learning objectives. The separate low/high resolution reconstructions obtain the best performance and robustness to changes in scale.
5. Discussion
In this section, we share observations about Scale-MAE, sketch our vision for future work, and discuss high-level questions about Scale-MAE.
Computational complexity. Scale-MAE requires a much smaller decoder than vanilla MAE: instead of a decoder depth of eight, Scale-MAE works well with a depth of three. In fact, with a ViT-Large backbone, Scale-MAE is smaller than vanilla MAE (322.9M vs. 329.5M parameters). However, GPU memory usage at equal batch sizes is higher for Scale-MAE, since we reconstruct a higher resolution image in the Scale-MAE Decoder.
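To make the decoder comparison concrete, the following is a minimal sketch that counts parameters for a generic transformer decoder stack at depth eight versus depth three; the width and head count are assumed values, and nn.TransformerEncoderLayer is only a stand-in for the actual MAE/Scale-MAE decoder blocks.

```python
# Minimal sketch (not the Scale-MAE code): compare decoder parameter counts at
# depth 8 (vanilla-MAE-style) vs. depth 3 (Scale-MAE-style). Width and head
# count below are assumptions, and nn.TransformerEncoderLayer stands in for
# the real decoder blocks.
import torch.nn as nn

def decoder_param_count(depth, dim=512, heads=16):
    layers = [
        nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                   dim_feedforward=4 * dim, batch_first=True)
        for _ in range(depth)
    ]
    return sum(p.numel() for layer in layers for p in layer.parameters())

print(f"depth 8: {decoder_param_count(8) / 1e6:.1f}M parameters")
print(f"depth 3: {decoder_param_count(3) / 1e6:.1f}M parameters")
```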
Multi-spectrality and modality. Electro-optical (EO)
satellites, such as those that produced the datasets used in this work, capture light at different wavelengths. Each
wavelength has a different sensor, and each sensor can have a
different resolution. Scale-MAE requires input tensors to be
stacked to pass through the model. This means that we are
unable to use Scale-MAE when the input image’s bands are
all of different GSDs. Additionally, synthetic aperture radar
(SAR) imagery is another form of remote sensing where res-
olution varies across a single band. Extending Scale-MAE
to work with different resolution bands and modalities is
reserved for future work.
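Scale-MAE itself does not address mixed-GSD bands; one possible workaround (our suggestion, not part of the method) is to resample each band onto a common grid before stacking. A minimal sketch with made-up band sizes and GSDs:

```python
# Hypothetical preprocessing sketch (not part of Scale-MAE): bring bands that
# were captured at different GSDs onto one grid so they can be stacked into a
# single input tensor. Shapes and GSDs below are invented for illustration.
import torch
import torch.nn.functional as F

def stack_bands(bands, target_hw):
    """bands: list of (1, 1, H_i, W_i) tensors, each at its own resolution."""
    resampled = [
        F.interpolate(b, size=target_hw, mode="bilinear", align_corners=False)
        for b in bands
    ]
    return torch.cat(resampled, dim=1)  # (1, num_bands, H, W)

red  = torch.rand(1, 1, 512, 512)   # e.g. a 10 m GSD band
nir  = torch.rand(1, 1, 512, 512)
swir = torch.rand(1, 1, 256, 256)   # e.g. a 20 m GSD band
x = stack_bands([red, nir, swir], target_hw=(512, 512))  # (1, 3, 512, 512)
```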
Can the Scale-MAE methodology be applied to other
backbones? Methods such as ConvNeXt [42] offer performance competitive with Transformers. The core components of our work can be integrated, with additional work, into different architectures: the Laplacian Decoder in Scale-MAE can be engineered to ingest convolutional feature maps, and existing work on scale-aware CNNs can be extended to work with the Laplacian Decoder.
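As a purely hypothetical illustration of that direction, the sketch below pulls multi-stage feature maps from a ConvNeXt backbone through timm's features_only interface; nothing here is part of Scale-MAE, and the Laplacian Decoder would still need to be adapted to consume such maps.

```python
# Hypothetical sketch of the future-work direction above: obtain multi-stage
# convolutional feature maps that a Laplacian-style decoder could ingest.
# Illustrative scaffolding only, not part of Scale-MAE.
import timm
import torch

backbone = timm.create_model("convnext_base", pretrained=False, features_only=True)
x = torch.rand(1, 3, 224, 224)
feature_maps = backbone(x)  # list of (B, C_i, H_i, W_i) maps, coarser at each stage
for fmap in feature_maps:
    print(fmap.shape)
```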
Decoding Layers | kNN @ 50% GSD | kNN @ 100% GSD
1 | 76.0 | 78.4
2 | 77.9 | 80.4
3 | 78.1 | 80.7
4 | 77.5 | 80.0
8 | 77.7 | 78.9
Table 8. Ablation results indicating that fewer transformer layers in the decoding stage tend to work better for Scale-MAE, as determined by a kNN classification on RESISC-45 at relative GSDs of 50% and 100% of its native GSD.
Low Res | High Res | Combined | kNN @ 50% GSD | kNN @ 100% GSD
✓ |   |   | 77.6 | 80.2
  | ✓ |   | 72.9 | 74.3
  |   | ✓ | 77.2 | 80.3
✓ | ✓ |   | 78.1 | 80.7
Table 9. These ablation results indicate that reconstructing both the low resolution and high resolution components leads to robust performance. Note: when the high resolution component is reconstructed, the low-resolution residual is not used; the high resolution result is reconstructed directly. The "Combined" entry merges the low and high resolution results instead of treating them as separate losses. The evaluations are a kNN classification (k=20) on RESISC-45 at relative GSDs of 50% and 100% of its native GSD.
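For reference, the ablations above are scored with a frozen-encoder kNN probe (k=20). The sketch below shows one way such an evaluation could be implemented; mean pooling over patch tokens and the encoder/loader interfaces are our assumptions, not details taken from the paper.

```python
# Minimal sketch of a frozen-feature kNN evaluation (k=20) as used in the
# ablations; pooling strategy and data interfaces are assumptions.
import torch
from sklearn.neighbors import KNeighborsClassifier

@torch.no_grad()
def extract_features(encoder, loader, device="cuda"):
    feats, labels = [], []
    for images, y in loader:
        tokens = encoder(images.to(device))     # (B, N, D) patch embeddings
        feats.append(tokens.mean(dim=1).cpu())  # (B, D) mean-pooled features
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

def knn_accuracy(encoder, train_loader, test_loader, k=20):
    x_tr, y_tr = extract_features(encoder, train_loader)
    x_te, y_te = extract_features(encoder, test_loader)
    clf = KNeighborsClassifier(n_neighbors=k).fit(x_tr, y_tr)
    return clf.score(x_te, y_te)
```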
Evaluating on more remote sensing datasets. The field
of remote sensing has seen a renaissance in the number of available datasets over the last five years. These range from generic, like Functional Map of the World, to highly specific, such as identifying illegal airstrips in Brazil [1, 8] or identifying illegal fishing vessels [47]. In fact, there are so many small, specific remote sensing datasets that entire review papers are written to enumerate them [60]. We chose to focus on datasets with properties of remote sensing that are relevant to multiscale representation learning.
6. Conclusion
Remote sensing imagery has accelerated the rate of scientific discovery in a broad set of disciplines. With increasingly
precise methods to extract environmental indicators using
computer vision methods, automated understanding of remotely sensed sources has become a mainstay in scientific
literature. Remote sensing payloads are diverse and capture
data at a wide range of resolutions, a feature heavily utilized
by scientists. Current computer vision methods for remote
sensing necessitate the training of a new model per input
resolution. Not only is the training process expensive, but
the overhead of curating a dataset at multiple scales makes this a daunting task.
We introduce Scale-MAE, a pretraining framework that brings scale invariance to encoders used for a diverse set of downstream tasks. Our insights into scale-inclusive positional encodings and progressive multi-frequency feature extraction result in models that perform
significantly better than state-of-the-art pretraining methods
across (1) multiple scales and (2) many benchmarks.
Our goal is to take the extremely diverse and rich source
of information present in remote sensing imagery and make it
simple to use with minimal training iterations required. With
the introduction of Scale-MAE, we hope to further accelerate
the rate at which scientific disciplines create impact.
Acknowledgements
We deeply thank Kyle Michel from Meta for providing us