---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: idx
    dtype: int64
  - name: label
    dtype: string
  - name: longitude
    dtype: float64
  - name: latitude
    dtype: float64
  - name: easting
    dtype: float64
  - name: northing
    dtype: float64
  - name: elevation
    dtype: float64
  - name: time
    dtype: int64
  - name: cluster
    dtype: int64
configs:
- config_name: labelled
  drop_labels: false
  data_files:
  - split: train
    path:
    - data/train/**/*.tif
    - data/train/metadata.csv
  - split: test
    path:
    - data/test/**/*.tif
    - data/test/metadata.csv
- config_name: unlabelled
  data_files:
  - split: train
    path:
    - "data/orthomosaic/*.tif"
---
# Background
The Leafy Spurge Dataset is a collection of top-down aerial images of grasslands in western Montana, USA. We surveyed a 150-hectare study area with a DJI Mavic 3M drone from 50 m above the ground surface and assembled the images into a contiguous orthomosaic using Drone Deploy software. Many scenes in the study area contain leafy spurge (*Euphorbia esula*), an invasive weed that disrupts the ecology of areas throughout North America. Botanists visited 1,000 sites in the study area and gathered ground truth on leafy spurge presence/absence within 0.5 x 0.5 m plots. The positions of these plots were referenced within the orthomosaic, and the surrounding areas were cropped from the larger image. The resulting processed data are 1024 x 1024 pixel .tif files; note, however, that the labels correspond only to the 39 x 39 pixel square (half-meter side length) at the center of each crop. We include the context around the ground-truth areas for experimental purposes. Our primary objective in serving these data is to invite the research community to develop classifiers that can serve as effective early-warning systems for spurge invasion at the highest spatial resolution possible.
[Please refer to our data release paper on Arxiv for further details.](https://arxiv.org)
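For orientation, the crop geometry above implies a ground sampling distance of roughly 1.3 cm per pixel, so a full 1024 x 1024 crop spans about a 13 m square on the ground. This is a back-of-the-envelope sketch derived only from the numbers in this card, not the exact survey GSD:

```python
# Approximate ground sampling distance implied by the crop geometry above.
plot_side_m = 0.5     # ground-truth plot side length in meters
plot_side_px = 39     # pixels spanned by the labelled center square
crop_side_px = 1024   # pixels spanned by a full crop

gsd_m = plot_side_m / plot_side_px   # meters per pixel, ~0.0128
crop_side_m = crop_side_px * gsd_m   # ground footprint of a full crop, ~13.1 m

print(f"GSD ~= {gsd_m * 100:.2f} cm/px, crop footprint ~= {crop_side_m:.1f} m")
```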
# Data loading and pre-processing
As a Hugging Face dataset, you may load the Leafy Spurge training set as follows:
```python
from datasets import load_dataset

ds = load_dataset('mpg-ranch/leafy_spurge', 'labelled', split='train')
ds[405]['image']  # view a single example
```
Next, we center-crop each image to the size of the ground-truth plot:
```python
from torchvision.transforms import CenterCrop, Compose

ground_truth_sz = 39  # side length in pixels of the labelled center square
ccrop = Compose([CenterCrop(ground_truth_sz)])

def preproc_transforms(examples):
    examples["pixel_values"] = [ccrop(image.convert("RGB")) for image in examples["image"]]
    return examples

ds = ds.map(preproc_transforms, batched=True)
ds['pixel_values'][405]
```
# Geographic splits within the training set
We gathered ground truth at multiple sites, and observations within a site were geographically clustered. We suggest using the `cluster` feature to establish holdout sets for cross-validated hyperparameter tuning. This will simulate model performance when classifying leafy spurge at new sites (such as those in the test set). You can filter by cluster metadata as follows:
```python
# define holdout sets from ground-truth clusters; 6 and 7 overlap geographically
holdout_sets = [[0], [1], [2], [4], [5], [6,7], [8]]
set_0 = ds.filter(lambda example: example['cluster'] in holdout_sets[0])
unq_vals = list(set(set_0['cluster']))
print(f'Unique cluster values in set 0: {unq_vals}')
```
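Since clusters 6 and 7 share a holdout set, it is worth verifying that no cluster appears in more than one set; a quick stand-alone sanity check on the list above:

```python
holdout_sets = [[0], [1], [2], [4], [5], [6, 7], [8]]

# Flatten the nested lists and confirm the sets are pairwise disjoint
flat = [c for s in holdout_sets for c in s]
assert len(flat) == len(set(flat)), "a cluster appears in more than one holdout set"
print(sorted(flat))  # [0, 1, 2, 4, 5, 6, 7, 8]
```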
# Example cross-validation loop
We will use the geographic cluster feature to cross-validate performance. First, let's reformat the dataset for torch and define some functions for our training loop:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision.models import resnet50, ResNet50_Weights
from tqdm import tqdm
import pandas as pd

ds = ds.with_format("torch")

def train_one_epoch(model, train_loader, criterion, optimizer, device):
    model.train()
    running_loss = 0.0
    correct_predictions = 0
    total_predictions = 0
    for batch in train_loader:
        inputs = batch['pixel_values'].permute(0, 3, 1, 2).float().to(device)
        labels = batch['label'].to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total_predictions += labels.size(0)
        correct_predictions += (predicted == labels).sum().item()
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    train_accuracy = correct_predictions / total_predictions
    train_loss = running_loss / len(train_loader)
    return train_loss, train_accuracy

def evaluate_one_epoch(model, test_loader, criterion, device):
    model.eval()
    running_loss = 0.0
    correct_predictions = 0
    total_predictions = 0
    with torch.no_grad():
        for batch in test_loader:
            inputs = batch['pixel_values'].permute(0, 3, 1, 2).float().to(device)
            labels = batch['label'].to(device)
            outputs = model(inputs)
            _, predicted = torch.max(outputs.data, 1)
            total_predictions += labels.size(0)
            correct_predictions += (predicted == labels).sum().item()
            loss = criterion(outputs, labels)
            running_loss += loss.item()
    test_accuracy = correct_predictions / total_predictions
    test_loss = running_loss / len(test_loader)
    return test_loss, test_accuracy

def cross_val(ds, holdout_set):
    train = ds.filter(lambda example: example['cluster'] not in holdout_set)
    test = ds.filter(lambda example: example['cluster'] in holdout_set)

    model = resnet50(weights=ResNet50_Weights.DEFAULT)  # `pretrained=True` is deprecated
    num_classes = len(ds['label'].unique())
    model.fc = nn.Linear(2048, num_classes)

    # Define the loss function and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Define the data loaders
    train_loader = torch.utils.data.DataLoader(train, batch_size=32, shuffle=True)
    test_loader = torch.utils.data.DataLoader(test, batch_size=32, shuffle=False)

    # Train the model
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)

    results = []
    for epoch in range(5):
        train_loss, train_accuracy = train_one_epoch(model, train_loader, criterion, optimizer, device)
        test_loss, test_accuracy = evaluate_one_epoch(model, test_loader, criterion, device)
        results.append({
            'epoch': epoch + 1,
            'train_loss': train_loss,
            'train_accuracy': train_accuracy,
            'test_loss': test_loss,
            'test_accuracy': test_accuracy,
            'holdout_set': holdout_set,
        })

    return pd.DataFrame(results)
```
Next we'll sequentially holdout geographic clusters and store performance:
```python
results = []

pbar_holdout = tqdm(holdout_sets, desc="Holdout Sets")
for holdout_set in pbar_holdout:
    results.append(cross_val(ds, holdout_set))
    pbar_holdout.set_postfix_str(f"Completed holdout set {holdout_set}")

results_df = pd.concat(results)
```
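Before plotting, it can help to summarise final-epoch accuracy for each holdout set. One wrinkle: the `holdout_set` column holds Python lists, which pandas cannot group on directly, so cast it to a string first. A sketch on a stand-in frame with made-up accuracies (apply the same two lines to `results_df`):

```python
import pandas as pd

# Stand-in for results_df with made-up numbers
toy = pd.DataFrame({
    'epoch': [1, 2, 1, 2],
    'test_accuracy': [0.60, 0.70, 0.50, 0.80],
    'holdout_set': [[0], [0], [1], [1]],
})

toy['holdout_set'] = toy['holdout_set'].astype(str)  # lists are unhashable; group on their string form
final = toy.sort_values('epoch').groupby('holdout_set')['test_accuracy'].last()
print(final)  # final-epoch test accuracy per holdout set
```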
Finally, we plot the results of geographic cross-validation:
```python
import numpy as np
import matplotlib.pyplot as plt
# Group the results by epoch
grouped_results = results_df.groupby('epoch')
# Compute the mean and standard deviation of the test accuracy at each epoch
mean_test_accuracy = grouped_results['test_accuracy'].mean()
std_test_accuracy = grouped_results['test_accuracy'].std()
# Compute a +/-1 standard deviation band (roughly a 68% interval under normality)
lower_bound = mean_test_accuracy - std_test_accuracy
upper_bound = mean_test_accuracy + std_test_accuracy
# Plot the mean test accuracy
plt.plot(mean_test_accuracy.index, mean_test_accuracy)
# Plot the error ribbon
plt.fill_between(lower_bound.index, lower_bound, upper_bound, color='b', alpha=.1)
# Set the axis labels
plt.xlabel('Epoch')
plt.ylabel('Cross-validated Accuracy')
# Show the plot
plt.show()
```