Figure 4: Left: the histogram of the number of temporal views. Right: the histogram of the standard deviation in years per area in the fMoW dataset.
studies utilize the geo-location of an image as a prior to improve image recognition accuracy [33, 14, 24, 20, 6]. Other studies [44, 11, 12, 42] use geo-tagged training datasets to learn to predict the geo-location of previously unseen images at test time. In our study, we leverage geo-tag information to improve unsupervised and self-supervised learning methods.
3. Problem Definition |
We consider a geo-tagged visual dataset $\{((x_i^1, \dots, x_i^{T_i}), lat_i, lon_i)\}_{i=1}^{N}$, where the $i$-th data point consists of a sequence of images $X_i = (x_i^1, \dots, x_i^{T_i})$ taken at a shared location, with latitude and longitude equal to $lat_i$ and $lon_i$ respectively, over times $t_i = 1, \dots, T_i$. When $T_i > 1$, we say the dataset has temporal information
or structure. Although temporal information is often not |
available in natural image datasets ( e.g. ImageNet), it is |
common in remote sensing. While the temporal structure is |
similar to that of conventional videos, there are some key |
differences that we exploit in this work. First, we consider |
relatively short temporal sequences, where the time differ- |
ence between two consecutive “frames” could range from |
months to years. Additionally, unlike conventional videos, we consider datasets where there is no viewpoint change across the image sequence.
Given our setup, we want to obtain visual representations $z_i^{t_i}$ of images $x_i^{t_i}$ such that the learned representation can be transferred to various downstream tasks. We do not assume access to any labels or human supervision beyond the $lat_i, lon_i$ geo-tags. The quality of the representations
is measured by their performance on various downstream |
tasks. Our primary goal is to improve the performance |
of self-supervised learning by utilizing the geo-coordinates |
and the unique temporal structure of remote sensing data. |
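To make this setup concrete, one data point can be sketched as a small Python record; the class and field names below are illustrative assumptions, not taken from any released code:

```python
from dataclasses import dataclass

@dataclass
class GeoSequence:
    """One data point: T_i spatially aligned images of a single
    location plus its geo-tag (lat_i, lon_i). Names are illustrative,
    not from any released code."""
    images: list   # x_i^1, ..., x_i^{T_i}, e.g. arrays of pixels
    lat: float
    lon: float

    @property
    def has_temporal_structure(self) -> bool:
        # T_i > 1: the location has multiple temporal views.
        return len(self.images) > 1

# A toy dataset: one single-view area and one multi-view area.
dataset = [
    GeoSequence(images=["img"], lat=37.4, lon=-122.1),
    GeoSequence(images=["img_a", "img_b", "img_c"], lat=48.9, lon=2.3),
]
print([s.has_temporal_structure for s in dataset])  # [False, True]
```

A natural-image dataset such as ImageNet would typically have every `images` list of length one, whereas remote sensing data often populates multiple temporal views per location.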
3.1. Functional Map of the World |
Functional Map of the World (fMoW) is a large-scale publicly available remote sensing dataset [5] consisting of 363,571 training images and 53,041 test images across 62 highly granular class categories. It provides
images (temporal views) from the same location over time |
$(x_i^1, \dots, x_i^{T_i})$ as well as geo-location metadata $(lat_i, lon_i)$
for each image. Fig. 4 shows the histogram of the number of temporal views in the fMoW dataset. Most areas have multiple temporal views, with $T_i$ ranging from 1 to 21, and on average there is a gap of about 2.5-3 years between the images of an area. We also show
examples of spatially aligned images in Fig. 2. As seen in |
Fig. 5, fMoW is a global dataset consisting of images from seven continents, which makes it well suited for learning global remote sensing representations.
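The per-area statistics discussed above can be computed in a few lines of Python once images are grouped by area; the toy records below are invented for illustration and are not actual fMoW metadata:

```python
from collections import Counter

# Hypothetical (area_id, year) records, one per image; in fMoW one
# would group images by their shared location. Values are toys.
images = [
    ("area_0", 2010), ("area_0", 2013), ("area_0", 2015),
    ("area_1", 2012),
    ("area_2", 2011), ("area_2", 2016),
]

# Number of temporal views T_i per area (the quantity in Fig. 4, left).
views_per_area = Counter(area for area, _ in images)
print(dict(views_per_area))  # {'area_0': 3, 'area_1': 1, 'area_2': 2}

def mean_year_gap(years):
    """Average gap in years between consecutive images of one area."""
    years = sorted(years)
    if len(years) < 2:
        return 0.0
    return (years[-1] - years[0]) / (len(years) - 1)

gaps = {area: mean_year_gap([y for a, y in images if a == area])
        for area in views_per_area}
print(gaps)  # {'area_0': 2.5, 'area_1': 0.0, 'area_2': 5.0}
```

Fig. 4 (right) reports the standard deviation of years per area; the mean gap shown here is a simpler stand-in for the same kind of per-area temporal statistic.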
3.2. GeoImageNet |
Following [7], we extract geo-coordinates for a sub- |
set of images in ImageNet [8] using the FLICKR API. |
More specifically, we searched for geo-tagged images |
in ImageNet using the FLICKR API, and were able to |
find 543,435 images with their associated coordinates $(lat_i, lon_i)$ across 5150 class categories. This dataset is more challenging than ImageNet-1k, as it is highly imbalanced and contains about 5× more classes. In the rest of
the paper, we refer to this geo-tagged subset of ImageNet as |
GeoImageNet . |
We show some examples from GeoImageNet in Fig. 3. |
As shown in the figure, for some images the geo-coordinates can be predicted from visual cues. For example, a picture of a person wearing a sombrero was captured in Mexico. Similarly, an Indian Elephant
picture was captured in India, where there is a large popula- |
tion of Indian Elephants. Next to it, we show the picture of |
an African Elephant (which is larger in size). If a model is |
trained to predict where in the world the image was taken, |
it should be able to identify visual cues that are transferable to other tasks (e.g., visual cues that differentiate Indian Elephants from their African counterparts). Figure 5 shows the
distribution of images in the GeoImageNet dataset. |
4. Method |
In this section, we briefly review contrastive loss functions for unsupervised learning and detail our proposed approach for improving MoCo-v2 [3], a recent contrastive learning framework, on geo-located data.
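As a reference point for this review, the InfoNCE objective that MoCo-style frameworks optimize can be sketched in pure Python. The unit vectors below are random toys rather than learned embeddings, and this is a sketch of the standard loss, not the paper's implementation:

```python
import math
import random

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE loss for a single query embedding: cross-entropy that
    classifies the positive key against the negative keys. Toy
    pure-Python version of the MoCo-style objective."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    logits = [dot(query, positive) / tau] + \
             [dot(query, n) / tau for n in negatives]
    log_sum = math.log(sum(math.exp(l) for l in logits))
    # Negative log-probability of the positive (index 0); always >= 0.
    return log_sum - logits[0]

def unit(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

random.seed(0)
d = 8
q = unit([random.gauss(0, 1) for _ in range(d)])
negatives = [unit([random.gauss(0, 1) for _ in range(d)]) for _ in range(16)]

loss_aligned = info_nce(q, q, negatives)  # positive identical to query
loss_random = info_nce(q, unit([random.gauss(0, 1) for _ in range(d)]),
                       negatives)
print(loss_aligned < loss_random)  # True: aligned positive => lower loss
```

In practice the query and key embeddings come from an encoder and a momentum encoder over two augmented views of the same image; the sketch only illustrates how the loss rewards agreement with the positive key relative to the negatives.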
4.1. Contrastive Learning Framework |
Contrastive methods [13, 2, 3, 34, 27] attempt to learn a mapping $f_q : x_i^t \mapsto z_i^t \in \mathbb{R}^d$ from raw pixels $x_i^t$ to semantically meaningful representations $z_i^t$ in an unsuper-
Figure 5: Top shows the distribution of the fMoW and Bot-