CONCH embeddings for EBRAINS dataset
CONCH is a vision-language foundation model created by the Mahmood Lab. We can use CONCH to embed patches from whole slide images.
In this dataset, we have embedded the EBRAINS whole slide images with IDH mutation status (n=873 slides). The EBRAINS dataset is available at https://doi.org/10.25493/WQ48-ZGX.
The vision-only directory contains embeddings with proj_contrast=False, and the vision-language directory contains embeddings with proj_contrast=True.
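For example, one slide's embeddings could be loaded with h5py as sketched below. The file name and the HDF5 dataset keys ('features' and 'coords') are assumptions for illustration; check the files in this dataset for the exact layout.
import h5py

# Hypothetical path and keys: adjust to match the actual files in this dataset.
path = "vision-only/EXAMPLE_SLIDE.h5"
with h5py.File(path, "r") as f:
    features = f["features"][:]  # assumed key: (num_patches, embedding_dim) array
    coords = f["coords"][:]      # assumed key: (num_patches, 2) patch coordinates
print(features.shape, coords.shape)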
Install
Create a conda environment with the required libraries.
conda create -n conch \
python pytorch torchvision pytorch-cuda=11.8 transformers numpy \
openslide openslide-python scikit-learn timm regex ftfy h5py pandas \
-c pytorch -c conda-forge
conda activate conch
python -m pip install git+https://github.com/mahmoodlab/CONCH
Get patch coordinates
Use CLAM to extract patch coordinates from whole slide images. Jakub has modified CLAM to extract patches of a constant physical size, but he still has to upload that code to GitHub.
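Until that modified code is available, the standard CLAM patching script can serve as a starting point. The command below follows the public CLAM README and uses placeholder directories; note that it extracts patches of a fixed pixel size rather than a constant physical size.
python create_patches_fp.py \
--source data/slides \
--save_dir data/patches \
--patch_size 256 \
--seg --patch --stitch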
Embed patches
Log in to HuggingFace Hub with the huggingface_hub command line interface or the Python package. Make sure you request access to CONCH with your HuggingFace Hub account.
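For example, with the command line interface (the Python package's login() function works equivalently):
huggingface-cli login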
Embed the patches using the script extract_features.py. These embeddings are suitable for vision-only models, such as an ABMIL model that infers a slide-level label (a minimal ABMIL sketch follows the command below).
python extract_features.py \
--wsi-dir data/slides \
--patch-dir data/patches \
--wsi-extension '.ndpi' \
--save-dir data/embeddings
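As a reference for what a downstream vision-only model might look like, below is a minimal attention-based MIL (ABMIL) pooling module in the style of Ilse et al. (2018). It is only a sketch, not the exact model used here; the embedding dimension, hidden size, and number of classes are assumptions.
import torch
import torch.nn as nn

class ABMIL(nn.Module):
    """Minimal attention-based MIL pooling over patch embeddings (Ilse et al., 2018)."""
    def __init__(self, embed_dim=512, hidden_dim=256, num_classes=2):
        super().__init__()
        # Attention network scores one weight per patch embedding.
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings):
        # patch_embeddings: (num_patches, embed_dim) for one slide.
        attn = torch.softmax(self.attention(patch_embeddings), dim=0)  # (num_patches, 1)
        slide_embedding = (attn * patch_embeddings).sum(dim=0)         # (embed_dim,)
        return self.classifier(slide_embedding), attn

# Example: pool 1000 random patch embeddings into one slide-level prediction.
model = ABMIL(embed_dim=512)
logits, attn = model(torch.randn(1000, 512))
print(logits.shape, attn.shape)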
If you want to get the similarity of patches to text prompts, then also include the --proj-contrast argument.
python extract_features.py \
--wsi-dir data/slides \
--patch-dir data/patches \
--wsi-extension '.ndpi' \
--proj-contrast \
--save-dir data/embeddings-projcontrast
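To compare the projected patch embeddings with text prompts, the prompts must be embedded with CONCH's text encoder. The sketch below follows the usage shown in the CONCH repository; the HDF5 key ('features'), the file path, and the prompts are assumptions for illustration, and loading the model requires the HuggingFace access requested above.
import h5py
import torch
import torch.nn.functional as F
from conch.open_clip_custom import create_model_from_pretrained, get_tokenizer, tokenize

# Load CONCH from the HuggingFace Hub (requires access to MahmoodLab/conch).
model, _ = create_model_from_pretrained("conch_ViT-B-16", "hf_hub:MahmoodLab/conch")
model.eval()

# Embed example text prompts with the CONCH text encoder.
tokenizer = get_tokenizer()
prompts = ["an H&E image of glioma", "an H&E image of normal brain tissue"]
tokens = tokenize(texts=prompts, tokenizer=tokenizer)
with torch.no_grad():
    text_embeddings = model.encode_text(tokens)

# Load the projected patch embeddings saved with --proj-contrast (key name is an assumption).
with h5py.File("data/embeddings-projcontrast/EXAMPLE_SLIDE.h5", "r") as f:
    patch_embeddings = torch.from_numpy(f["features"][:]).float()

# Cosine similarity between every patch and every prompt.
patch_embeddings = F.normalize(patch_embeddings, dim=-1)
text_embeddings = F.normalize(text_embeddings, dim=-1)
similarity = patch_embeddings @ text_embeddings.T  # (num_patches, num_prompts)
print(similarity.shape)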