---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: idx
    dtype: int64
  - name: label
    dtype: string
  - name: longitude
    dtype: float64
  - name: latitude
    dtype: float64
  - name: easting
    dtype: float64
  - name: northing
    dtype: float64
  - name: elevation
    dtype: float64
  - name: time
    dtype: int64
  - name: cluster
    dtype: int64
configs:
- config_name: labelled
  drop_labels: false
  data_files:
  - split: train
    path:
    - data/train/**/*.tif
    - data/train/metadata.csv
  - split: test
    path:
    - data/test/**/*.tif
    - data/test/metadata.csv
- config_name: unlabelled
  data_files:
  - split: train
    path:
    - "data/orthomosaic/*.tif"
---
<img src="https://huggingface.co/datasets/mpg-ranch/leafy_spurge/resolve/main/doc_figures/spurge_photo_2_panel.png" width="100%">

# Background

The Leafy Spurge Dataset is a collection of top-down aerial images of grasslands in western Montana, USA. We surveyed a 150-hectare study area with a DJI Mavic 3M drone from 50 m above the ground surface and assembled the images into a contiguous orthomosaic using Drone Deploy software. Many scenes in the study area contain leafy spurge (*Euphorbia esula*), an invasive weed that disrupts the ecology of areas throughout North America. Botanists visited 1,000 sites in the study area and gathered ground truth of leafy spurge presence/absence within 0.5 x 0.5 m plots. The positions of these plots were referenced within the orthomosaic, and the corresponding areas were cropped from the larger image. The resulting processed data are 1024 x 1024 pixel .tif files; note, however, that the labelled areas correspond to the 39 x 39 pixel square (half-meter side length) at the center of each crop. We include the context around the ground-truth areas for experimental purposes. Our primary objective in serving these data is to invite the research community to develop classifiers that serve as effective early-warning systems for spurge invasion at the highest spatial resolution possible.

[Please refer to our data release paper on arXiv for further details.](https://arxiv.org)

# Data loading and pre-processing

The Leafy Spurge training set is hosted as a Hugging Face dataset and can be loaded as follows:
```python
from datasets import load_dataset

# load the labelled training split
ds = load_dataset('mpg-ranch/leafy_spurge', 'labelled', split='train')

# display an example image
ds['image'][405]
```
<img src="https://huggingface.co/datasets/mpg-ranch/leafy_spurge/resolve/main/doc_figures/full_size_tile.png" width="1024px" height="1024px">
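Each record also pairs the image with plot-level metadata (see the `dataset_info` header above). A quick way to inspect one example:

```python
# every record carries georeferencing metadata alongside the image
example = ds[405]
for key in ['idx', 'label', 'longitude', 'latitude', 'easting',
            'northing', 'elevation', 'time', 'cluster']:
    print(f'{key}: {example[key]}')
```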

We will now center crop the image to the size of the ground truth:

```python
from torchvision.transforms import CenterCrop, Compose

# the ground-truth label applies only to the central 39 x 39 pixel square
ground_truth_sz = 39

ccrop = Compose([CenterCrop(ground_truth_sz)])

def preproc_transforms(examples):
    # convert to RGB and crop each image down to the labelled area
    examples["pixel_values"] = [ccrop(image.convert("RGB")) for image in examples["image"]]
    return examples

ds = ds.map(preproc_transforms, batched=True)
ds['pixel_values'][405]
```
<img src="https://huggingface.co/datasets/mpg-ranch/leafy_spurge/resolve/main/doc_figures/ground_truth_tile.png" width="39px" height="39px">
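The card also exposes an `unlabelled` config (see the YAML header above) that serves the raw orthomosaic tiles without ground truth; a minimal loading sketch:

```python
# the 'unlabelled' config serves raw orthomosaic tiles with no labels
ortho = load_dataset('mpg-ranch/leafy_spurge', 'unlabelled', split='train')
print(ortho)
```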

# Geographic splits within the training set
<img src="https://huggingface.co/datasets/mpg-ranch/leafy_spurge/resolve/main/doc_figures/train_clusters.png" width="75%" height="75%">

We gathered ground truth at multiple sites, and observations within a site were geographically clustered. We suggest using the `cluster` feature to establish holdout sets for cross-validated hyperparameter tuning; this simulates model performance when classifying leafy spurge at new sites (such as those of the test set). You can filter by cluster metadata as follows:

```python
# define holdout sets from ground-truth clusters; clusters 6 and 7 overlap geographically
holdout_sets = [[0], [1], [2], [4], [5], [6, 7], [8]]

set_0 = ds.filter(lambda example: example['cluster'] in holdout_sets[0])
unq_vals = list(set(set_0['cluster']))
print(f'Unique cluster values in set 0: {unq_vals}')
```
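Before committing to holdout sets, it can help to tally how many ground-truth plots fall in each cluster; a quick check:

```python
from collections import Counter

# count ground-truth plots per geographic cluster
cluster_counts = Counter(ds['cluster'])
print(sorted(cluster_counts.items()))
```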

# Example cross-validation loop
We will use the geographic cluster feature to cross-validate performance. First, let's reformat the dataset for torch and define some functions for our training loop:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision.models import resnet50, ResNet50_Weights
from tqdm import tqdm
import pandas as pd

# encode the string labels as integer class ids, then hand tensors to torch
ds = ds.class_encode_column('label')
ds = ds.with_format("torch")

def train_one_epoch(model, train_loader, criterion, optimizer, device):
    model.train()
    running_loss = 0.0
    correct_predictions = 0
    total_predictions = 0
    for i, batch in enumerate(train_loader):
        # datasets yields HWC tensors; convert to NCHW floats for the model
        inputs = batch['pixel_values'].permute(0, 3, 1, 2).float().to(device)
        labels = batch['label'].to(device)

        optimizer.zero_grad()

        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total_predictions += labels.size(0)
        correct_predictions += (predicted == labels).sum().item()

        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

    train_accuracy = correct_predictions / total_predictions
    train_loss = running_loss / len(train_loader)

    return train_loss, train_accuracy

def evaluate_one_epoch(model, test_loader, criterion, device):
    model.eval()
    running_loss = 0.0
    correct_predictions = 0
    total_predictions = 0
    with torch.no_grad():
        for batch in test_loader:
            inputs = batch['pixel_values'].permute(0,3,1,2).float().to(device)
            labels = batch['label'].to(device)

            outputs = model(inputs)
            _, predicted = torch.max(outputs.data, 1)
            total_predictions += labels.size(0)
            correct_predictions += (predicted == labels).sum().item()

            loss = criterion(outputs, labels)
            running_loss += loss.item()

    test_accuracy = correct_predictions / total_predictions
    test_loss = running_loss / len(test_loader)

    return test_loss, test_accuracy

def cross_val(ds, holdout_set):
    train = ds.filter(lambda example: example['cluster'] not in holdout_set)
    test = ds.filter(lambda example: example['cluster'] in holdout_set)

    # ImageNet-pretrained backbone with a fresh classification head
    model = resnet50(weights=ResNet50_Weights.DEFAULT)
    num_classes = ds.features['label'].num_classes
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    # Define the loss function and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Define the data loaders
    train_loader = torch.utils.data.DataLoader(train, batch_size=32, shuffle=True)
    test_loader = torch.utils.data.DataLoader(test, batch_size=32, shuffle=False)

    # Train the model
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)

    results = []

    for epoch in range(5):
        train_loss, train_accuracy = train_one_epoch(model, train_loader, criterion, optimizer, device)
        test_loss, test_accuracy = evaluate_one_epoch(model, test_loader, criterion, device)

        results.append({
            'epoch': epoch + 1,
            'train_loss': train_loss,
            'train_accuracy': train_accuracy,
            'test_loss': test_loss,
            'test_accuracy': test_accuracy,
            'holdout_set': holdout_set
        })
    
    results_df = pd.DataFrame(results)

    return results_df

```

Next we'll sequentially hold out each set of geographic clusters and store performance:
```python
results = []
pbar_holdout = tqdm(holdout_sets, desc="Holdout Sets")
for holdout_set in pbar_holdout:
    results.append(cross_val(ds, holdout_set))
    pbar_holdout.set_postfix_str(f"Completed holdout set {holdout_set}")

results_df = pd.concat(results)
```

Finally, we plot the results of geographic cross-validation:
```python
import matplotlib.pyplot as plt

# Group the results by epoch
grouped_results = results_df.groupby('epoch')

# Compute the mean and standard deviation of the test accuracy at each epoch
mean_test_accuracy = grouped_results['test_accuracy'].mean()
std_test_accuracy = grouped_results['test_accuracy'].std()

# Mean ± one standard deviation spans roughly a 68% interval
lower_bound = mean_test_accuracy - std_test_accuracy
upper_bound = mean_test_accuracy + std_test_accuracy

# Plot the mean test accuracy
plt.plot(mean_test_accuracy.index, mean_test_accuracy)

# Plot the error ribbon
plt.fill_between(lower_bound.index, lower_bound, upper_bound, color='b', alpha=.1)

# Set the axis labels
plt.xlabel('Epoch')
plt.ylabel('Cross-validated Accuracy')

# Show the plot
plt.show()
```
<img src="https://huggingface.co/datasets/mpg-ranch/leafy_spurge/resolve/main/doc_figures/cluster_cv_fig.png" width="75%" height="75%">
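Once hyperparameters are settled via geographic cross-validation, a natural last step is to retrain on all training clusters and score the held-out test split. A minimal sketch, assuming `model`, `criterion`, and `device` come from your final training run (they are local to `cross_val` as written, so you would lift them out or retrain first):

```python
# load the held-out test sites and mirror the training preprocessing
test_ds = load_dataset('mpg-ranch/leafy_spurge', 'labelled', split='test')
test_ds = test_ds.map(preproc_transforms, batched=True)
test_ds = test_ds.class_encode_column('label').with_format('torch')

test_loader = torch.utils.data.DataLoader(test_ds, batch_size=32, shuffle=False)

# `model`, `criterion`, and `device` are assumed to be your final trained
# classifier and the objects defined above
test_loss, test_accuracy = evaluate_one_epoch(model, test_loader, criterion, device)
print(f'Test accuracy: {test_accuracy:.3f}')
```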