
AutoLamella Dataset

The AutoLamella dataset consists of images from multiple lamella preparation methods. All data is annotated for semantic segmentation and is available through the Hugging Face Hub at patrickcleeve/autolamella.

Summary

| Dataset / Method | Train | Test | Total |
| ---------------- | ----- | ---- | ----- |
| Waffle           | 214   | 76   | 290   |
| Liftout          | 801   | 163  | 964   |
| Serial Liftout   | 301   | 109  | 410   |
| Full             | 1316  | 348  | 1664  |

Details about the datasets can be found in summary.csv in the dataset directory.
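If you want to inspect those details programmatically, here is a minimal sketch using the huggingface_hub client (it assumes summary.csv sits at the root of the dataset repository; the exact columns may differ):

from huggingface_hub import hf_hub_download
import pandas as pd

# download summary.csv from the dataset repo (root path is an assumption)
csv_path = hf_hub_download(
    repo_id="patrickcleeve/autolamella",
    filename="summary.csv",
    repo_type="dataset",
)

df = pd.read_csv(csv_path)
print(df.head())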

Labels

The dataset is currently labelled for the following classes. In the future, additional labels will be added for objects such as ice contamination. If you would like to help label this data, please see the labelling tools to get started.

CLASS_LABELS: # autolamella
  0: "background"
  1: "lamella"
  2: "manipulator"
  3: "landing_post"
  4: "copper_adaptor"
  5: "volume_block"

Download Datasets

To download a dataset, use the Hugging Face datasets library:


from datasets import load_dataset

# download waffle dataset
ds = load_dataset("patrickcleeve/autolamella", name="waffle")

# download liftout dataset
ds = load_dataset("patrickcleeve/autolamella", name="liftout")

# download serial-liftout dataset
ds = load_dataset("patrickcleeve/autolamella", name="serial-liftout")

# download test split only
ds = load_dataset("patrickcleeve/autolamella", name="waffle", split="test")
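Printing the returned DatasetDict is a quick way to confirm the download; the split sizes should match the summary table above (output abbreviated):

ds = load_dataset("patrickcleeve/autolamella", name="waffle")
print(ds)
# DatasetDict({
#     train: Dataset({features: ['image', 'annotation'], num_rows: 214})
#     test: Dataset({features: ['image', 'annotation'], num_rows: 76})
# })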

To display images and annotations:

# show a random image and annotation (training split)
import random
import numpy as np
import matplotlib.pyplot as plt
from datasets import load_dataset
from fibsem.segmentation.utils import decode_segmap_v2

ds = load_dataset("patrickcleeve/autolamella", name="waffle")

# pick a random sample from the training split (randint is inclusive, so subtract 1)
idx = random.randint(0, len(ds["train"]) - 1)
image = np.asarray(ds["train"][idx]["image"])
mask = np.asarray(ds["train"][idx]["annotation"])

# metadata
split = ds["train"].split
config_name = ds["train"].config_name

plt.title(f"{config_name}-{split}-{idx:02d}")
plt.imshow(image, cmap="gray", alpha=0.7)
plt.imshow(decode_segmap_v2(mask), alpha=0.3)
plt.axis("off")
plt.show()
Example image/annotation overlays: Waffle, Liftout, Serial Liftout.

You can also concatenate the datasets into a single dataset for combined training (e.g. training a single "mega model" across all methods):

from datasets import load_dataset, concatenate_datasets

# load individual datasets
waffle_train_ds = load_dataset("patrickcleeve/autolamella", name="waffle", split="train")
liftout_train_ds = load_dataset("patrickcleeve/autolamella", name="liftout", split="train")
serial_liftout_train_ds = load_dataset("patrickcleeve/autolamella", name="serial-liftout", split="train")

# concatenate datasets (e.g. mega model)
train_ds = concatenate_datasets([waffle_train_ds, liftout_train_ds, serial_liftout_train_ds])

print(train_ds)
Dataset({
    features: ['image', 'annotation'],
    num_rows: 1316
})
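Before training, you would typically shuffle the combined dataset and attach a transform that converts the PIL images and masks to arrays. A minimal sketch (the normalisation and batch size here are illustrative assumptions, not part of the dataset card):

import numpy as np
from torch.utils.data import DataLoader

# shuffle so the three methods are interleaved rather than ordered by source
train_ds = train_ds.shuffle(seed=42)

def to_arrays(batch):
    # convert PIL images/masks to arrays on the fly (pixel values assumed 8-bit)
    batch["image"] = [np.asarray(img, dtype=np.float32) / 255.0 for img in batch["image"]]
    batch["annotation"] = [np.asarray(ann, dtype=np.int64) for ann in batch["annotation"]]
    return batch

train_ds.set_transform(to_arrays)

# batching assumes all images share the same resolution; resize first if they do not
loader = DataLoader(train_ds, batch_size=4, shuffle=True)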

Acknowledgement

  • Waffle and Liftout data from Monash
  • Serial Liftout data from MPI