Update README.md

README.md

---
{}
---
AM-RADIO: Reduce All Domains Into One
=====================================

# Model Overview

Mike Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov

This model performs visual feature extraction.
For instance, RADIO generates image embeddings that can be used by a downstream model to classify images.
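To make that usage pattern concrete, here is a minimal, purely illustrative sketch of a downstream linear classifier sitting on top of RADIO image embeddings; the embedding width, class count, and the `summary` tensor are placeholders rather than values taken from this card.

```python
# Illustrative sketch only: a downstream linear head over RADIO embeddings.
# `embed_dim`, `num_classes`, and the random `summary` batch are placeholders.
import torch
import torch.nn as nn

embed_dim, num_classes = 1280, 1000          # assumed sizes, not from this card

head = nn.Linear(embed_dim, num_classes)     # the "downstream model"

summary = torch.randn(8, embed_dim)          # stand-in for per-image RADIO embeddings
logits = head(summary)                       # (8, num_classes)
predictions = logits.argmax(dim=-1)          # one class index per image
```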

This model is for research and development only.

[NVIDIA Research](https://www.nvidia.com/en-us/research/)

## References

\[[Paper](https://arxiv.org/abs/2312.06709)\]\[[BibTex](#citing-radio)\]\[[GitHub examples](https://github.com/NVlabs/RADIO)\]

## Model Architecture:
**Architecture Type:** Neural Network <br>
**Network Architecture:** Vision Transformer <br>

### Input:
**Input Type(s):** Image <br>
**Input Format(s):** Red, Green, Blue (RGB) <br>
**Input Parameters:** Two Dimensional (2D) <br>
**Other Properties Related to Input:** Image resolutions up to 2048x2048 in increments of 16 pixels <br>

### Output:
**Output Type(s):** Embeddings <br>
**Output Format:** Tensor <br>
**Output Parameters:** 2D <br>
**Other Properties Related to Output:** Downstream model required to leverage image features <br>
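As a small worked example of the input constraint above (side lengths in increments of 16 pixels, up to 2048 per side), the helper below snaps an image tensor to the nearest valid resolution. It is a sketch under those stated constraints, not part of any released preprocessing API; the function name and resizing strategy are assumptions.

```python
# Sketch: snap H and W to multiples of 16, capped at 2048 per side.
# The helper name and bilinear resizing are illustrative assumptions.
import torch
import torch.nn.functional as F

def snap_resolution(x: torch.Tensor, multiple: int = 16, max_side: int = 2048) -> torch.Tensor:
    """x: float image batch of shape (B, 3, H, W) in RGB."""
    _, _, h, w = x.shape
    new_h = min(max_side, max(multiple, round(h / multiple) * multiple))
    new_w = min(max_side, max(multiple, round(w / multiple) * multiple))
    if (new_h, new_w) == (h, w):
        return x
    return F.interpolate(x, size=(new_h, new_w), mode="bilinear", align_corners=False)

img = torch.rand(1, 3, 500, 700)   # arbitrary resolution
img = snap_resolution(img)         # resized to (1, 3, 496, 704)
```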

### Software Integration:
**Runtime Engine(s):**
* TAO 24.10 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere <br>
* NVIDIA Blackwell <br>
* NVIDIA Jetson <br>
* NVIDIA Hopper <br>
* NVIDIA Lovelace <br>
* NVIDIA Pascal <br>
* NVIDIA Turing <br>
* NVIDIA Volta <br>

**Supported Operating System(s):** <br>
* Linux
* Linux 4 Tegra
* QNX
* Windows

### License/Terms of Use

RADIO code and weights are released under the [NSCLv1 License](LICENSE).

## Pretrained Models

Refer to `model_results.csv` for model versions and their metrics.

**Link:** https://huggingface.co/collections/nvidia/radio-669f77f1dd6b153f007dd1c6

## HuggingFace Hub

In order to pull the model from HuggingFace, you need to be logged in:
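A minimal sketch of that flow, assuming the standard Hugging Face tooling, a checkpoint from the collection linked above (here `nvidia/RADIO`), and that the repository ships custom modeling code:

```python
# Sketch of authenticating and pulling a RADIO checkpoint; the repo id and the
# trust_remote_code requirement are assumptions, so defer to the model card/repo.
from huggingface_hub import login
from transformers import AutoModel

login()  # or run `huggingface-cli login` in a shell; prompts for an access token

model = AutoModel.from_pretrained("nvidia/RADIO", trust_remote_code=True)
model.eval()
```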

It is not required that $H=W$, although we have not specifically trained or tested the model in this setting.
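For a concrete non-square case, the sketch below runs an input with $H \ne W$ where both sides are multiples of 16. The two-part output shown (a pooled summary embedding plus patch-level spatial features) follows the GitHub examples linked above and is an assumption rather than something this card specifies.

```python
# Illustrative non-square forward pass; `model` is assumed to be a loaded RADIO
# instance (see the HuggingFace Hub sketch above). The (summary, spatial_features)
# unpacking mirrors the GitHub examples and is not guaranteed by this card.
import torch

x = torch.rand(1, 3, 512, 768)  # H=512, W=768, both divisible by 16

with torch.inference_mode():
    summary, spatial_features = model(x)

print(summary.shape)            # per-image embedding, e.g. (1, C)
print(spatial_features.shape)   # patch features, e.g. (1, (512//16) * (768//16), C)
```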

# Training, Testing, and Evaluation Datasets:

## Training Dataset:

**Link:** https://www.datacomp.ai/ <br>
**Data Collection Method by dataset:** <br>
* Automated <br>

**Labeling Method by dataset:** <br>
* Not Applicable (no labels are needed) <br>

**Properties (Quantity, Dataset Descriptions, Sensor(s)):** 12.8 billion diverse images gathered from the Internet using Common Crawl <br>

## Evaluation Dataset:

**Link:** [ImageNet](https://www.image-net.org/) <br>
**Data Collection Method by dataset:** <br>
* Automated <br>

**Labeling Method by dataset:** <br>
* Human <br>

**Properties (Quantity, Dataset Descriptions, Sensor(s)):** This dataset spans 1,000 object classes and contains 1,281,167 training images, 50,000 validation images, and 100,000 test images. <br>

## Inference:
**Engine:** PyTorch <br>
**Test Hardware:** A100 <br>
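A rough sketch of what a PyTorch inference pass on a CUDA GPU such as an A100 might look like; the bfloat16 autocast and the chosen resolution are assumptions, not recommendations from this card:

```python
# Sketch of GPU inference; precision, batch size, and resolution are assumptions.
import torch

model = model.cuda().eval()                          # `model` from the HuggingFace Hub section
images = torch.rand(4, 3, 448, 448, device="cuda")   # RGB batch, sides divisible by 16

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    outputs = model(images)                          # embeddings consumed by a downstream head
```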

# Citing RADIO

If you find this repository useful, please consider giving a star and citation:
```
@InProceedings{Ranzinger_2024_CVPR,
    author    = {Ranzinger, Mike and Heinrich, Greg and Kautz, Jan and Molchanov, Pavlo},
    title     = {AM-RADIO: Agglomerative Vision Foundation Model Reduce All Domains Into One},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {12490-12500}
}
```

# Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.