---
library_name: transformers
tags: []
---

# Model Card for Phenom CA-MAE-S/16

Channel-agnostic image encoding model designed for microscopy image featurization.
The model uses a vision transformer backbone with channelwise cross-attention over patch tokens to create contextualized representations separately for each channel.

## Model Details

### Model Description

This model is a [channel-agnostic masked autoencoder](https://openaccess.thecvf.com/content/CVPR2024/html/Kraus_Masked_Autoencoders_for_Microscopy_are_Scalable_Learners_of_Cellular_Biology_CVPR_2024_paper.html) trained to reconstruct microscopy images over three datasets:

1. RxRx3
2. JUMP-CP overexpression
3. JUMP-CP gene-knockouts

- **Developed, funded, and shared by:** Recursion
- **Model type:** Vision transformer CA-MAE
- **Image modality:** Optimized for microscopy images from the CellPainting assay
- **License:**

### Model Sources

- **Repository:** [https://github.com/recursionpharma/maes_microscopy](https://github.com/recursionpharma/maes_microscopy)
- **Paper:** [Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology](https://openaccess.thecvf.com/content/CVPR2024/html/Kraus_Masked_Autoencoders_for_Microscopy_are_Scalable_Learners_of_Cellular_Biology_CVPR_2024_paper.html)

## Uses

NOTE: the model's embeddings tend to yield useful features only after standard batch-correction post-processing. **We recommend**, at a *minimum*, applying the standard `PCA-CenterScale` pattern (or, better yet, Typical Variation Normalization) after running inference on your images:

1. Fit a PCA kernel on all the *control images* (or all images, if you have no controls) from across all experimental batches (e.g. the plates of wells from your assay),
2. Transform all the embeddings with that PCA kernel,
3. For each experimental batch, fit a separate StandardScaler on the transformed embeddings of the controls from step 2, then transform the rest of the embeddings from that batch with that StandardScaler; see the sketch after this list.
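
A minimal scikit-learn sketch of the `PCA-CenterScale` pattern described above (the `embeddings`, `is_control`, and `batch_ids` arrays are hypothetical placeholders you would assemble from your own experiment):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_center_scale(embeddings, is_control, batch_ids):
    # embeddings: (n_images, 384) array of embeddings from model.predict
    # is_control: boolean mask marking the control images
    # batch_ids: one experimental-batch label (e.g. plate ID) per image

    # Step 1: fit the PCA kernel on controls pooled across all batches
    pca = PCA().fit(embeddings[is_control])
    # Step 2: transform every embedding with that kernel
    transformed = pca.transform(embeddings)
    # Step 3: center/scale each batch by the statistics of its own controls
    corrected = np.empty_like(transformed)
    for batch in np.unique(batch_ids):
        in_batch = batch_ids == batch
        scaler = StandardScaler().fit(transformed[in_batch & is_control])
        corrected[in_batch] = scaler.transform(transformed[in_batch])
    return corrected
```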

### Direct Use

- Create biologically useful embeddings of microscopy images
- Create contextualized embeddings of each channel of a microscopy image (set `return_channelwise_embeddings=True`)
- Leverage the full MAE encoder + decoder to predict new channels / stains for images without all 6 CellPainting channels

### Downstream Use

- A determined ML expert could fine-tune the encoder for downstream tasks such as classification; short of full fine-tuning, a linear probe on frozen embeddings is a simple starting point (see the sketch below)
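
A minimal linear-probe sketch on frozen embeddings (the `embeddings` and `labels` arrays are hypothetical placeholders; `embeddings` would come from `model.predict`, ideally batch-corrected as above):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# embeddings: (n_images, 384); labels: one class per image (e.g. perturbation identity)
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out probe accuracy:", probe.score(X_test, y_test))
```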

### Out-of-Scope Use

- Unlikely to be especially performant on brightfield microscopy images
- Out-of-domain medical images, such as H&E stains (though it might serve as a decent baseline)

## Bias, Risks, and Limitations

- The primary limitation is that the embeddings tend to be more useful at scale. For example, if you only have 1 plate of microscopy images, the embeddings might underperform compared to a bespoke supervised model.

## How to Get Started with the Model

You should be able to run the tests below successfully; they demonstrate how to use the model at inference time.

```python
import pytest
import torch

from huggingface_mae import MAEModel

huggingface_phenombeta_model_dir = "."
# huggingface_modelpath = "recursionpharma/test-pb-model"


@pytest.fixture
def huggingface_model():
    # Make sure you have downloaded the model/config from
    # https://huggingface.co/recursionpharma/test-pb-model to this directory:
    #   huggingface-cli download recursionpharma/test-pb-model --local-dir=.
    huggingface_model = MAEModel.from_pretrained(huggingface_phenombeta_model_dir)
    huggingface_model.eval()
    return huggingface_model


@pytest.mark.parametrize("C", [1, 4, 6, 11])
@pytest.mark.parametrize("return_channelwise_embeddings", [True, False])
def test_model_predict(huggingface_model, C, return_channelwise_embeddings):
    # Random uint8 "images": batch of 2, C channels, 256x256 pixels
    example_input_array = torch.randint(
        low=0,
        high=255,
        size=(2, C, 256, 256),
        dtype=torch.uint8,
        device=huggingface_model.device,
    )
    huggingface_model.return_channelwise_embeddings = return_channelwise_embeddings
    embeddings = huggingface_model.predict(example_input_array)
    # One pooled 384-d embedding, or one 384-d embedding per channel
    expected_output_dim = 384 * C if return_channelwise_embeddings else 384
    assert embeddings.shape == (2, expected_output_dim)
```
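
For reference, a minimal standalone inference sketch using the same API as the tests above (assumes the model weights/config have already been downloaded to the current directory):

```python
import torch

from huggingface_mae import MAEModel

model = MAEModel.from_pretrained(".")
model.eval()

# A batch of two 6-channel CellPainting images, uint8, 256x256
images = torch.randint(0, 255, (2, 6, 256, 256), dtype=torch.uint8)
model.return_channelwise_embeddings = True  # one 384-d embedding per channel
with torch.no_grad():
    embeddings = model.predict(images)
print(embeddings.shape)  # torch.Size([2, 2304]), i.e. 6 * 384
```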

## Training, Evaluation, and Testing Details

See the paper linked above for details on model training and evaluation. Primary hyperparameters are included in the repo linked above.

## Environmental Impact

- **Hardware Type:** Nvidia H100 Hopper nodes
- **Hours used:** 400
- **Cloud Provider:** private cloud
- **Carbon Emitted:** 138.24 kg CO2 (roughly the equivalent of one car driving from Toronto to Montreal)

**BibTeX:**

```TeX
@inproceedings{kraus2024masked,
  title={Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology},
  author={Kraus, Oren and Kenyon-Dean, Kian and Saberian, Saber and Fallah, Maryam and McLean, Peter and Leung, Jess and Sharma, Vasudev and Khan, Ayla and Balakrishnan, Jia and Celik, Safiye and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11757--11768},
  year={2024}
}
```

## Model Card Contact

- Kian Kenyon-Dean: kian.kd@recursion.com
- Oren Kraus: oren.kraus@recursion.com
- Or, email: info@rxrx.ai