|
--- |
|
license: mit |
|
library_name: pytorch |
|
tags: |
|
- biology |
|
- microscopy |
|
- text-to-image |
|
- transformers |
|
metrics: |
|
- accuracy |
|
--- |
|
[![Huang Lab](images/huanglogo.jpeg)](http://huanglab.ucsf.edu)
|
|
|
# CELL-E 2 |
|
|
|
## Model description |
|
[![CELL-E_2](images/architecture.png)](https://bohuanglab.github.io/CELL-E_2/) |
|
|
|
CELL-E 2 is the second iteration of the original [CELL-E](https://www.biorxiv.org/content/10.1101/2022.05.27.493774v1) model, which utilizes an amino acid sequence and a nucleus image to predict subcellular protein localization with respect to the nucleus.
|
|
|
CELL-E 2 is a novel bidirectional transformer that can generate images depicting protein subcellular localization from amino acid sequences (and *vice versa*).
|
CELL-E 2 not only captures the spatial complexity of protein localization and produces probability estimates of localization atop a nucleus image, but can also generate sequences from images, enabling *de novo* protein design.
|
We trained on the [Human Protein Atlas](https://www.proteinatlas.org) (HPA) and the [OpenCell](https://opencell.czbiohub.org) datasets. |
|
|
|
CELL-E 2 utilizes pretrained amino acid embeddings from [ESM-2](https://github.com/facebookresearch/esm). |
|
Localization is predicted as a binary image atop the provided nucleus. The logit values are weighted against these binary images to produce a heatmap of expected localization. |
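
As a rough illustration of this weighting (a minimal sketch with assumed names and shapes, not the repository's implementation):

```python
import torch

# Assumed shape: per-pixel localization logits over a 256x256 crop.
logits = torch.randn(256, 256)

# Threshold the probabilities into a binary localization image.
binary = (torch.sigmoid(logits) > 0.5).float()

# Weight the probabilities by the binary image to form a heatmap of
# expected localization atop the nucleus.
heatmap = torch.sigmoid(logits) * binary
```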
|
|
|
## Spaces |
|
|
|
We have two Hugging Face Spaces where you can run predictions on your own data!
|
|
|
- [Image Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Image_Prediction) |
|
- [Sequence Prediction](https://huggingface.co/spaces/HuangLab/CELL-E_2-Sequence_Prediction) |
|
|
|
## Model variations |
|
|
|
We have made several versions of CELL-E 2 available. The naming scheme follows the structure ```training set_hidden size```, where the hidden size matches the embedding dimension of the pretrained ESM-2 model (e.g., `HPA_480` was trained on the HPA dataset with a hidden size of 480).

We annotate the most useful models in the Notes column; however, the other models can be used when memory is constrained.

Since these models share similarities with BERT, the embeddings from any of these models may be beneficial for downstream tasks.
|
|
|
**HPA Models**: |
|
HPA models are trained on the HPA dataset. They are best for general-purpose predictions, as they include a variety of cell types.
|
|
|
| Model | Size | Notes |
|-------|------|-------|
| [`HPA_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_480) | 4.73 GB | **Best for Image Prediction** |
| [`HPA_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_640) | 6.31 GB | |
| [`HPA_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_1280) | 10.8 GB | |
| [`HPA_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_2560) | 17.5 GB | **Best for Sequence Prediction** |
|
|
|
**OpenCell Models**: |
|
OpenCell models are trained on the OpenCell dataset, which contains only HEK cells, so they should ideally only be used for predictions on HEK cells. They perform well on image prediction, but the generated heatmaps contain little information.
|
|
|
| Model | Size | Notes |
|-------|------|-------|
| [`OpenCell_480`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_480) | 4.73 GB | |
| [`OpenCell_640`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_640) | 6.31 GB | |
| [`OpenCell_1280`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_1280) | 10.8 GB | |
| [`OpenCell_2560`](https://huggingface.co/HuangLab/CELL-E_2_OpenCell_2560) | 17.5 GB | **Best for Sequence Prediction** |
|
|
|
**Finetuned HPA Models**: |
|
These models used the HPA models as checkpoints and were then finetuned on the OpenCell dataset. We found that they improve image generation, but did not necessarily see an improvement in sequence prediction.
|
|
|
| Model | Size | Notes |
|-------|------|-------|
| [`HPA_Finetuned_480`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_480) | 4.73 GB | **Best for Image Prediction** |
| [`HPA_Finetuned_640`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_640) | 6.31 GB | |
| [`HPA_Finetuned_1280`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_1280) | 10.8 GB | |
| [`HPA_Finetuned_2560`](https://huggingface.co/HuangLab/CELL-E_2_HPA_Finetuned_2560) | 17.5 GB | |
|
|
|
To reduce download size, we removed the ESM-2 weights from the checkpoints. They are downloaded automatically the first time the code is run; keep this in mind if you load these checkpoints into other projects.
|
|
|
### How to use |
|
|
|
The full codebase is available on [GitHub](https://github.com/BoHuangLab/CELL-E_2). |
|
Download the model and make sure ```nucleus_vqgan.yaml```, ```threshold_vqgan.yaml```, ```config.yaml```, and ```model.ckpt``` are present.
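
Checkpoints can also be fetched programmatically; here is a minimal sketch using `huggingface_hub` (`HPA_480` is shown as an example; substitute any repo id from the tables above):

```python
from huggingface_hub import snapshot_download

# Downloads every file of the chosen CELL-E 2 variant (configs, model.ckpt, ...).
local_dir = snapshot_download(repo_id="HuangLab/CELL-E_2_HPA_480")
print(local_dir)
```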
|
Here is how to use this model to do sequence prediction: |
|
|
|
```python
import torch
from omegaconf import OmegaConf

# instantiate_from_config is provided by the CELL-E 2 codebase;
# adjust this import to match the repository layout.
from celle_main import instantiate_from_config

device = "cuda" if torch.cuda.is_available() else "cpu"

configs = OmegaConf.load("configs/config.yaml")
model = instantiate_from_config(configs.model).to(device)

# sequence: amino acid string; nucleus: nucleus image tensor
model.sample(text=sequence, condition=nucleus)
```
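
The `sequence` and `nucleus` variables above are placeholders. As a rough, hypothetical sketch of preparing them (the exact crop size and normalization live in the repository's data loading code):

```python
from PIL import Image
import torchvision.transforms as T

# Hypothetical preprocessing; follow the repository for the real pipeline.
transform = T.Compose([T.Grayscale(), T.Resize((256, 256)), T.ToTensor()])
nucleus = transform(Image.open("nucleus.png")).unsqueeze(0).to(device)

sequence = "MGSSHHHHHHSSGLVPRGSH"  # placeholder amino acid sequence
```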
|
|
|
### BibTeX entry and citation info |
|
|
|
```bibtex
@inproceedings{anonymous2023translating,
  title={CELL-E 2: Translating Proteins to Pictures and Back with a Bidirectional Text-to-Image Transformer},
  author={Emaad Khwaja and Yun S. Song and Aaron Agarunov and Bo Huang},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=YSMLVffl5u}
}
```
|
|
|
### Contact |
|
|
|
We are an interdisciplinary lab based at [UCSF](https://www.ucsf.edu). We are particularly seeking talent in optical engineering, machine learning, and cellular microscopy. [Please reach out to Bo if you're interested in collaborating!](http://huanglab.ucsf.edu/Contact.html)