xinyu1205 committed on
Commit
47dec7d
1 Parent(s): 4a16ef8

Update README.md

Files changed (1): README.md +174 -3
README.md CHANGED
@@ -10,8 +10,179 @@ pinned: false
---


- Welcome to Tag2Text demo! (Fudan University, OPPO Research Institute, International Digital Economy Academy).

- Upload your image to get the tags and caption of the image. Optional: You can also input specified tags to get the corresponding caption.

- We are constantly updating this demo.
+ # :label: Recognize Anything: A Strong Image Tagging Model & Tag2Text: Guiding Vision-Language Model via Image Tagging
+
+ Official PyTorch implementation of the <a href="https://recognize-anything.github.io/">Recognize Anything Model (RAM)</a> and the <a href="https://tag2text.github.io/">Tag2Text Model</a>.
+
+ - RAM is a strong image tagging model that can recognize any common category with high accuracy.
+ - Tag2Text is an efficient and controllable vision-language model with tagging guidance.
+
+ When combined with localization models such as [Grounded-SAM](https://github.com/IDEA-Research/Grounded-Segment-Anything), Tag2Text and RAM form a strong and general pipeline for visual semantic analysis.
+
+ ![](./images/ram_grounded_sam.jpg)
+
+ ## :sun_with_face: Helpful Tutorial
+
+ - :apple: [[Access RAM Homepage](https://recognize-anything.github.io/)]
+ - :grapes: [[Access Tag2Text Homepage](https://tag2text.github.io/)]
+ - :sunflower: [[Read RAM arXiv Paper](https://arxiv.org/abs/2306.03514)]
+ - :rose: [[Read Tag2Text arXiv Paper](https://arxiv.org/abs/2303.05657)]
+ - :mushroom: [[Try our Tag2Text web Demo! 🤗](https://huggingface.co/spaces/xinyu1205/Tag2Text)]
+
+ ## :bulb: Highlight
+ **Recognition and localization are two foundational computer vision tasks.**
+ - **The Segment Anything Model (SAM)** excels in **localization capabilities**, while it falls short when it comes to **recognition tasks**.
+ - **The Recognize Anything Model (RAM) and Tag2Text** exhibit **exceptional recognition abilities**, in terms of **both accuracy and scope**.
+
+ <p align="center">
+ <table class="tg">
+ <tr>
+ <td class="tg-c3ow"><img src="images/localization_and_recognition.jpg" align="center" width="800" ></td>
+ </tr>
+ </table>
+ </p>
+
+ <details close>
+ <summary><font size="4">
+ Tag2Text for Vision-Language Tasks.
+ </font></summary>
+
+ - **Tagging.** Without manual annotations, Tag2Text achieves **superior** image tag recognition ability covering [**3,429**](./data/tag_list.txt) commonly used categories.
+ - **Efficient.** Tagging guidance effectively enhances the performance of vision-language models on both **generation-based** and **alignment-based** tasks.
+ - **Controllable.** Tag2Text permits users to input **desired tags**, providing flexibility in composing corresponding texts based on the input tags.
+
+ <p align="center">
+ <table class="tg">
+ <tr>
+ <td class="tg-c3ow"><img src="images/tag2text_framework.png" align="center" width="800" ></td>
+ </tr>
+ </table>
+ </p>
+ </details>
+
+ <details close>
+ <summary><font size="4">
+ Advancements of RAM over Tag2Text.
+ </font></summary>
+
+ - **Accuracy.** RAM utilizes a data engine to generate additional annotations and clean incorrect ones, resulting in higher accuracy than Tag2Text.
+ - **Scope.** Tag2Text recognizes 3,400+ fixed tags. RAM upgrades the number to 6,400+, covering more valuable categories. With its open-set capability, RAM can recognize any common category.
+
+ </details>
+
+ ## :sparkles: Highlight Projects with other Models
+ - [Tag2Text/RAM with Grounded-SAM](https://github.com/IDEA-Research/Grounded-Segment-Anything) is a strong and general pipeline for visual semantic analysis, which can automatically **recognize**, detect, and segment an image!
+ - [Ask-Anything](https://github.com/OpenGVLab/Ask-Anything) is a multifunctional video question answering tool. Tag2Text provides powerful tagging and captioning capabilities as a fundamental component.
+ - [Prompt-can-anything](https://github.com/positive666/Prompt-Can-Anything) is a Gradio web library that integrates SOTA multimodal large models, including Tag2Text as the core model for image understanding.
+
+ ## :fire: News
+
+ - **`2023/06/07`**: We release the [Recognize Anything Model (RAM)](https://recognize-anything.github.io/), a strong image tagging model!
+ - **`2023/06/05`**: Tag2Text is combined with [Prompt-can-anything](https://github.com/positive666/Prompt-Can-Anything).
+ - **`2023/05/20`**: Tag2Text is combined with [VideoChat](https://github.com/OpenGVLab/Ask-Anything).
+ - **`2023/04/20`**: We marry Tag2Text with [Grounded-SAM](https://github.com/IDEA-Research/Grounded-Segment-Anything).
+ - **`2023/04/10`**: Code and checkpoints are available now!
+ - **`2023/03/14`**: [Tag2Text web demo 🤗](https://huggingface.co/spaces/xinyu1205/Tag2Text) is available on Hugging Face Space!
+
+ ## :writing_hand: TODO
+
+ - [x] Release Tag2Text demo.
+ - [x] Release checkpoints.
+ - [x] Release inference code.
+ - [ ] Release RAM demo and checkpoints (by June 14th at the latest).
+ - [ ] Release training code (by August 1st at the latest).
+ - [ ] Release training datasets (by August 1st at the latest).
+
+ ## :toolbox: Checkpoints
+
+ <table>
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>Name</th>
+ <th>Backbone</th>
+ <th>Data</th>
+ <th>Illustration</th>
+ <th>Checkpoint</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>1</th>
+ <td>Tag2Text-Swin</td>
+ <td>Swin-Base</td>
+ <td>COCO, VG, SBU, CC-3M, CC-12M</td>
+ <td>Demo version with comprehensive captions.</td>
+ <td><a href="https://huggingface.co/spaces/xinyu1205/Tag2Text/blob/main/tag2text_swin_14m.pth">Download link</a></td>
+ </tr>
+ </tbody>
+ </table>
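The download link in the table points at the checkpoint's Hugging Face "blob" page. For scripted downloads (e.g. with `wget` or `urllib`), Hugging Face serves the raw file at the same path with `/resolve/` in place of `/blob/`. A minimal helper sketch (this function is an illustration, not part of the repository):

```python
# Hypothetical helper (not in the Tag2Text repo): turn a Hugging Face "blob"
# page URL, as linked in the checkpoint table, into a direct-download URL.
# Hugging Face serves raw files under /resolve/ instead of /blob/.
def blob_to_resolve(blob_url: str) -> str:
    return blob_url.replace("/blob/", "/resolve/", 1)

checkpoint_page = "https://huggingface.co/spaces/xinyu1205/Tag2Text/blob/main/tag2text_swin_14m.pth"
print(blob_to_resolve(checkpoint_page))
# https://huggingface.co/spaces/xinyu1205/Tag2Text/resolve/main/tag2text_swin_14m.pth
```

Save the fetched file under `pretrained/`, which is where the inference commands below look for it.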
+
+ ## :running: Tag2Text Inference
+
+ 1. Install the dependencies:
+
+ <pre>pip install -r requirements.txt</pre>
+
+ 2. Download the Tag2Text pretrained checkpoint.
+
+ 3. Get the tagging and captioning results:
+ <pre>
+ python inference.py --image images/1641173_2291260800.jpg \
+ --pretrained pretrained/tag2text_swin_14m.pth
+ </pre>
+ Or get the tagging and specified captioning results (optional):
+ <pre>python inference.py --image images/1641173_2291260800.jpg \
+ --pretrained pretrained/tag2text_swin_14m.pth \
+ --specified-tags "cloud,sky"</pre>
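The `--specified-tags` flag takes a comma-separated list of tags to guide captioning. As a sketch of that flag convention (hypothetical parsing code, not the repository's actual `inference.py`):

```python
import argparse

# Hypothetical sketch of the CLI shown above -- not the repository's actual
# inference.py, just an illustration of its flag conventions.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Tag2Text inference (sketch)")
    parser.add_argument("--image", required=True, help="path to the input image")
    parser.add_argument("--pretrained", required=True, help="path to the .pth checkpoint")
    parser.add_argument("--specified-tags", default=None,
                        help='comma-separated tags, e.g. "cloud,sky"')
    return parser

def parse_specified_tags(raw):
    """Split 'cloud,sky' into ['cloud', 'sky']; None or '' yields []."""
    if not raw:
        return []
    return [tag.strip() for tag in raw.split(",") if tag.strip()]

args = build_parser().parse_args([
    "--image", "images/1641173_2291260800.jpg",
    "--pretrained", "pretrained/tag2text_swin_14m.pth",
    "--specified-tags", "cloud,sky",
])
print(parse_specified_tags(args.specified_tags))  # ['cloud', 'sky']
```

When the flag is omitted, an empty tag list would fall back to the model's own predicted tags.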
+
+ ## :black_nib: Citation
+ If you find our work useful for your research, please consider citing:
+
+ ```
+ @misc{zhang2023recognize,
+ title={Recognize Anything: A Strong Image Tagging Model},
+ author={Youcai Zhang and Xinyu Huang and Jinyu Ma and Zhaoyang Li and Zhaochuan Luo and Yanchun Xie and Yuzhuo Qin and Tong Luo and Yaqian Li and Shilong Liu and Yandong Guo and Lei Zhang},
+ year={2023},
+ eprint={2306.03514},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+
+ @article{huang2023tag2text,
+ title={Tag2Text: Guiding Vision-Language Model via Image Tagging},
+ author={Huang, Xinyu and Zhang, Youcai and Ma, Jinyu and Tian, Weiwei and Feng, Rui and Zhang, Yuejie and Li, Yaqian and Guo, Yandong and Zhang, Lei},
+ journal={arXiv preprint arXiv:2303.05657},
+ year={2023}
+ }
+ ```
+
+ ## :hearts: Acknowledgements
+ This work is done with the help of the amazing codebase of [BLIP](https://github.com/salesforce/BLIP), thanks very much!
+
+ We also want to thank @Cheng Rui, @Shilong Liu, and @Ren Tianhe for their help in [marrying Tag2Text with Grounded-SAM](https://github.com/IDEA-Research/Grounded-Segment-Anything).