
Model Card for HEST-1k

Description

What is HEST-1k?

  • A collection of 1,229 spatial transcriptomic profiles, each linked and aligned to a Whole Slide Image (with pixel size > 1.15 µm/px) and metadata.
  • HEST-1k was assembled from 131 public and internal cohorts encompassing:
    • 26 organs
    • 2 species (Homo Sapiens and Mus Musculus)
    • 367 cancer samples from 25 cancer types.

HEST-1k processing enabled the identification of 1.5 million expression/morphology pairs and 76 million nuclei.

Updates

  • 21.10.24: HEST has been accepted to NeurIPS 2024 as a Spotlight! We will be in Vancouver from Dec 10th to 15th. Send us a message if you want to learn more about HEST (gjaume@bwh.harvard.edu).

  • 23.09.24: 121 new samples released, including 27 Xenium and 7 Visium HD! We have also made the aligned Xenium transcripts and the aligned DAPI-segmented cells/nuclei public.

  • 30.08.24: HEST-Benchmark results updated to include H-Optimus-0, Virchow 2, Virchow, and GigaPath. New COAD task based on 4 Xenium samples. The HuggingFace benchmark data have been updated.

  • 28.08.24: New set of helpers for batch effect visualization and correction. Tutorial here.

Instructions for Setting Up HuggingFace Account and Token

1. Create an Account on HuggingFace

Follow the instructions provided on the HuggingFace sign-up page.

2. Accept terms of use of HEST

  1. On this page, click "Request access" (access will be granted automatically).
  2. At this stage, you can already inspect the data manually by navigating to the Files and versions tab.

3. Create a Hugging Face Token

  1. Go to Settings: Navigate to your profile settings by clicking on your profile picture in the top right corner and selecting Settings from the dropdown menu.

  2. Access Tokens: In the settings menu, find and click on Access tokens.

  3. Create New Token:

    • Click on New token.
    • Set the token name (e.g., hest).
    • Set the access level to Write.
    • Click on Create.
  4. Copy Token: After the token is created, copy it to your clipboard. You will need this token for authentication.

4. Log in

Install the datasets library:

pip install datasets

Then, in Python, log in with your token:

from huggingface_hub import login

login(token="YOUR HUGGINGFACE TOKEN")
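
As an alternative to passing the token to login() in code, huggingface_hub also reads it from the HF_TOKEN environment variable, which avoids hard-coding a secret in scripts. A minimal sketch (the value below is a placeholder, not a real token):

```python
import os

# Setting HF_TOKEN before any huggingface_hub call lets every Hub
# request authenticate without an explicit login() in the script.
# The value below is a placeholder; use your own token.
os.environ["HF_TOKEN"] = "hf_xxxxxxxx"

token_is_set = "HF_TOKEN" in os.environ
print(token_is_set)  # True
```

In practice you would export HF_TOKEN from your shell profile rather than set it in Python.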

Download the entire HEST-1k dataset:

import datasets

local_dir='hest_data' # hest will be downloaded to this folder

# Note that the full dataset is around 1TB of data

dataset = datasets.load_dataset(
    'MahmoodLab/hest', 
    cache_dir=local_dir,
    patterns='*'
)
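
Since the full dataset is around 1 TB, it can be worth checking free disk space before launching the download. A small stdlib-only sketch (the 1 TB threshold simply mirrors the note above):

```python
import shutil

# Free space (in bytes) on the filesystem holding the target folder.
free_bytes = shutil.disk_usage(".").free
needed_bytes = 1 * 10**12  # ~1 TB, per the note above

if free_bytes < needed_bytes:
    print(f"Only {free_bytes / 10**9:.0f} GB free; the full download needs ~1 TB.")
else:
    print("Enough free space for the full dataset.")
```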

Download a subset of HEST-1k:

import datasets

local_dir='hest_data' # hest will be downloaded to this folder

ids_to_query = ['TENX96', 'TENX99'] # list of ids to query

list_patterns = [f"*{id}[_.]**" for id in ids_to_query]
dataset = datasets.load_dataset(
    'MahmoodLab/hest', 
    cache_dir=local_dir,
    patterns=list_patterns
)
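
The `[_.]` in each pattern restricts matches to filenames where the id is immediately followed by `_` or `.`, which avoids pulling in other ids that share a prefix. Python's fnmatch uses a similar glob-style syntax, so it can illustrate the idea (the filenames below are invented for the demo, and fnmatch is only an approximation of the Hub's matching rules):

```python
from fnmatch import fnmatch

pattern = "*TENX96[_.]**"  # same shape as the patterns built above

# Invented filenames, just for illustration.
assert fnmatch("wsis/TENX96.tif", pattern)             # id followed by '.'
assert fnmatch("patches/TENX96_patches.h5", pattern)   # id followed by '_'
assert not fnmatch("wsis/TENX960.tif", pattern)        # different id, shared prefix
```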

Query HEST by organ, technology, oncotree code...

import datasets
import pandas as pd

local_dir='hest_data' # hest will be downloaded to this folder

meta_df = pd.read_csv("hf://datasets/MahmoodLab/hest/HEST_v1_1_0.csv")

# Filter the dataframe by organ, oncotree code...
meta_df = meta_df[meta_df['oncotree_code'] == 'IDC']
meta_df = meta_df[meta_df['organ'] == 'Breast']

ids_to_query = meta_df['id'].values

list_patterns = [f"*{id}[_.]**" for id in ids_to_query]
dataset = datasets.load_dataset(
    'MahmoodLab/hest', 
    cache_dir=local_dir,
    patterns=list_patterns
)
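
The same filtering works for any column in the metadata CSV, and both conditions can be combined in a single boolean mask instead of two successive filters. A toy illustration on a hand-made frame (the rows are invented; only the column names come from HEST's metadata):

```python
import pandas as pd

# Invented rows mimicking the structure of the metadata CSV;
# only the column names are taken from the real file.
meta_df = pd.DataFrame({
    "id": ["S1", "S2", "S3"],
    "organ": ["Breast", "Breast", "Lung"],
    "oncotree_code": ["IDC", "ILC", "LUAD"],
})

# Combine both conditions in one boolean mask.
mask = (meta_df["oncotree_code"] == "IDC") & (meta_df["organ"] == "Breast")
ids_to_query = meta_df.loc[mask, "id"].tolist()
print(ids_to_query)  # ['S1']
```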

Loading the data with the Python library hest

Once downloaded, you can then easily iterate through the dataset:

from hest import iter_hest

for st in iter_hest('../hest_data', id_list=['TENX95']):
    print(st)

Please visit the github repo and the documentation for more information about the hest library API.

Data organization

For each sample:

  • wsis/: H&E stained Whole Slide Images in pyramidal Generic TIFF (or pyramidal Generic BigTIFF if >4.1GB)
  • st/: spatial transcriptomics expressions in a scanpy .h5ad object
  • metadata/: metadata
  • spatial_plots/: overlay of the WSI with the st spots
  • thumbnails/: downscaled version of the WSI
  • tissue_seg/: tissue segmentation masks:
    • {id}_mask.jpg: downscaled or full resolution greyscale tissue mask
    • {id}_mask.pkl: tissue/holes contours in a pickle file
    • {id}_vis.jpg: visualization of the tissue mask on the downscaled WSI
  • pixel_size_vis/: visualization of the pixel size
  • patches/: 256x256 H&E patches (0.5µm/px) extracted around ST spots in a .h5 object optimized for deep-learning. Each patch is matched to the corresponding ST profile (see st/) with a barcode.
  • patches_vis/: visualization of the mask and patches on a downscaled WSI.
  • cellvit_seg/: CellViT nuclei segmentation
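
Only the tissue_seg/ filenames are spelled out above, but they follow an id-based naming scheme. Assuming that layout (top-level folders with files named after the sample id), the expected paths for one sample can be sketched as follows; treat it as an orientation aid, not an exhaustive map of the dataset:

```python
from pathlib import Path

def tissue_seg_paths(local_dir: str, sample_id: str) -> dict:
    """Expected tissue segmentation files for one sample, following the
    {id}_mask.jpg / {id}_mask.pkl / {id}_vis.jpg naming listed above."""
    root = Path(local_dir) / "tissue_seg"
    return {
        "mask": root / f"{sample_id}_mask.jpg",       # greyscale tissue mask
        "contours": root / f"{sample_id}_mask.pkl",   # tissue/holes contours
        "visualization": root / f"{sample_id}_vis.jpg",
    }

paths = tissue_seg_paths("hest_data", "TENX96")
print(paths["mask"])  # hest_data/tissue_seg/TENX96_mask.jpg
```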

For each Xenium sample:

  • transcripts/: individual transcripts aligned to H&E for Xenium samples; read with pandas.read_parquet; aligned coordinates in pixels are in the columns ['he_x', 'he_y']
  • xenium_seg/: Xenium segmentation on DAPI, aligned to H&E
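
Once downloaded, a transcripts parquet file can be read with pandas. The sketch below fakes a few rows with the documented he_x/he_y columns to show how the aligned pixel coordinates could be converted to micrometres; the 0.5 µm/px pixel size is hypothetical (the real value is sample-specific and stored in the metadata), and the parquet filename in the comment is made up:

```python
import pandas as pd

# For a real sample you would do something like
#   df = pd.read_parquet("transcripts/TENX96_transcripts.parquet")
# (filename hypothetical). Here we fake a few rows instead.
df = pd.DataFrame({
    "he_x": [100.0, 250.0],  # aligned H&E coordinates, in pixels
    "he_y": [80.0, 310.0],
})

pixel_size_um = 0.5  # hypothetical µm/px; the real value is sample-specific
df["he_x_um"] = df["he_x"] * pixel_size_um
df["he_y_um"] = df["he_y"] * pixel_size_um
print(df["he_x_um"].tolist())  # [50.0, 125.0]
```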

How to cite:

@article{jaume2024hest,
    author = {Jaume, Guillaume and Doucet, Paul and Song, Andrew H. and Lu, Ming Y. and Almagro-Perez, Cristina and Wagner, Sophia J. and Vaidya, Anurag J. and Chen, Richard J. and Williamson, Drew F. K. and Kim, Ahrong and Mahmood, Faisal},
    title = {{HEST-1k: A Dataset for Spatial Transcriptomics and Histology Image Analysis}},
    journal = {arXiv},
    year = {2024},
    month = jun,
    eprint = {2406.16192},
    url = {https://arxiv.org/abs/2406.16192v1}
}

Contact:

  • Guillaume Jaume Harvard Medical School, Boston, Mahmood Lab (gjaume@bwh.harvard.edu)
  • Paul Doucet Harvard Medical School, Boston, Mahmood Lab (pdoucet@bwh.harvard.edu)

The dataset is distributed under the Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0 Deed)
