# VeCLIP: Improving CLIP Training via Visual-enriched Captions

*A novel CLIP training scheme that achieves SoTA performance on zero-shot ImageNet classification and COCO image-text retrieval using limited visual-enriched captions.* [[Paper](https://arxiv.org/abs/2310.07699)]

[Zhengfeng Lai*](https://zjujefflai.github.io/), [Haotian Zhang*](https://haotian-zhang.github.io/), [Bowen Zhang](https://zbwglory.github.io/), Wentao Wu, Haoping Bai, Aleksei Timofeev, Xianzhi Du, [Zhe Gan](https://zhegan27.github.io/), Jiulong Shan, [Chen-Nee Chuah](https://www.ece.ucdavis.edu/~chuah/rubinet/people/chuah/bio.html), Yinfei Yang, Meng Cao [*: equal contribution]


<p align="center">
    <img src="veclip_diagram.jpg" width="100%"> <br>
    Diagram of VeCap.
</p>

## Release
- [03/06/2024] 🔥 We released the VeCLIP & VeCap-DFN [checkpoints](#checkpoints).

## Contents
- [Install](#install)
- [Getting Started](#getting-started)
- [Checkpoints](#checkpoints)

## Install

1. Clone this repository
```Shell
git clone https://github.com/apple/ml-veclip
cd ml-veclip
```

2. Create an environment and install related packages
```Shell
conda create -n veclip python=3.9 -y
conda activate veclip
pip install -r requirements.txt
```

## Getting Started

See the [example notebook](https://github.com/apple/ml-veclip/blob/main/load_veclip.ipynb) for details on how to load the different checkpoints with Hugging Face Transformers.
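
The snippet below is a minimal sketch of zero-shot inference, assuming the unzipped checkpoint directory (here the hypothetical local path `checkpoints/veclip_b16_200m`) loads through the standard `CLIPModel`/`CLIPProcessor` interfaces; the notebook remains the authoritative reference.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical path to an unzipped checkpoint (see Checkpoints below).
ckpt_dir = "checkpoints/veclip_b16_200m"

model = CLIPModel.from_pretrained(ckpt_dir)
processor = CLIPProcessor.from_pretrained(ckpt_dir)

# "example.jpg" is a placeholder image path.
image = Image.open("example.jpg")
inputs = processor(
    text=["a photo of a dog", "a photo of a cat"],
    images=image,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# Scaled image-text similarity logits, normalized into
# probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```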


## Checkpoints

We release the checkpoints for **VeCLIP**, trained from scratch on the visual-enriched captions VeCap 3M/12M/100M/200M, as reported in the paper. The models are evaluated zero-shot on COCO/Flickr30k image-text retrieval and ImageNet/ImageNetv2 classification. Use `wget` or `curl` to download the checkpoints below.
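
For example, to fetch and unpack the VeCLIP-B/16 checkpoint trained on VeCap 200M (the same URL linked in the table; the `checkpoints/` target directory is just a suggestion):

```Shell
# Download the VeCLIP-B/16 (VeCap 200M) checkpoint and unzip it
wget https://docs-assets.developer.apple.com/ml-research/models/veclip/veclip_b16_200m.zip
unzip veclip_b16_200m.zip -d checkpoints/
```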

<table>
<thead>
  <tr>
    <th rowspan="2">Data</th>
    <th rowspan="2">Model</th>
    <th rowspan="2">Resolution</th>
    <th colspan="2">COCO (R@1)</th>
    <th colspan="2">Flickr30k (R@1)</th>
    <th rowspan="2">ImageNet</th>
    <th rowspan="2">ImageNetv2</th>
  </tr>
  <tr>
    <th>I2T</th>
    <th>T2I</th>
    <th>I2T</th>
    <th>T2I</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td rowspan="2">VeCap 3M</td>
    <td>CLIP-B/16</td>
    <td>224x224</td>
    <td>5.46</td>
    <td>3.28</td>
    <td>12.20</td>
    <td>6.36</td>
    <td>5.46</td>
    <td>7.09</td>
  </tr>
  <tr>
    <td><a href="https://docs-assets.developer.apple.com/ml-research/models/veclip/veclip_b16_3m.zip">VeCLIP-B/16</a></td>
    <td>224x224</td>
    <td>22.30</td>
    <td>13.01</td>
    <td>40.60</td>
    <td>27.58</td>
    <td>15.98</td>
    <td>13.51</td>
  </tr>
  <tr>
    <td rowspan="2">VeCap 12M</td>
    <td>CLIP-B/16</td>
    <td>224x224</td>
    <td>24.52</td>
    <td>14.28</td>
    <td>44.70</td>
<td>29.06</td>
    <td>31.60</td>
    <td>27.03</td>
  </tr>
  <tr>
    <td><a href="https://docs-assets.developer.apple.com/ml-research/models/veclip/veclip_b16_12m.zip">VeCLIP-B/16</a></td>
    <td>224x224</td>
    <td>47.78</td>
    <td>31.62</td>
    <td>73.90</td>
    <td>55.68</td>
    <td>38.11</td>
    <td>32.53</td>
  </tr>
  <tr>
    <td rowspan="2">VeCap 100M</td>
    <td>CLIP-B/16</td>
    <td>224x224</td>
    <td>47.24</td>
    <td>30.61</td>
    <td>74.40</td>
    <td>57.16</td>
    <td>58.64</td>
    <td>50.96</td>
  </tr>
  <tr>
    <td><a href="https://docs-assets.developer.apple.com/ml-research/models/veclip/veclip_b16_100m.zip">VeCLIP-B/16</a></td>
    <td>224x224</td>
    <td>64.82</td>
    <td>46.12</td>
    <td>89.30</td>
    <td>73.10</td>
    <td>60.77</td>
    <td>54.17</td>
  </tr>
  <tr>
    <td rowspan="2">VeCap 200M</td>
    <td>CLIP-B/16</td>
    <td>224x224</td>
    <td>52.20</td>
    <td>34.97</td>
    <td>80.90</td>
    <td>63.26</td>
    <td>63.72</td>
    <td>56.84</td>
  </tr>
  <tr>
    <td><a href="https://docs-assets.developer.apple.com/ml-research/models/veclip/veclip_b16_200m.zip">VeCLIP-B/16</a></td>
    <td>224x224</td>
    <td>67.20</td>
    <td>48.40</td>
    <td>91.10</td>
    <td>76.32</td>
    <td>64.64</td>
    <td>57.67</td>
  </tr>
</tbody>
</table>


We further find that VeCap is complementary to other well-established data-filtering methods, e.g., the [Data Filtering Network (DFN)](https://arxiv.org/abs/2309.17425). We also provide those checkpoints (referred to as **VeCap-DFN**) and report their performance below.

<table>
<thead>
<tr>
<th rowspan="2">Backbone</th>
<th rowspan="2">Resolution</th>
<th rowspan="2">Data</th>
<th colspan="2">COCO (R@1)</th>
<th colspan="2">Flickr30k (R@1)</th>
<th rowspan="2">ImageNet</th>
<th rowspan="2">ImageNetV2</th>
</tr>
<tr>
<th>I2T</th>
<th>T2I</th>
<th>I2T</th>
<th>T2I</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3"><a href="https://docs-assets.developer.apple.com/ml-research/models/veclip/vecapdfn_clip_b16.zip">VeCap-DFN-B/16</a></td>
<td rowspan="3">224x224</td>
<td>DFN </td>
<td>62.96</td>
<td>43.20</td>
<td>87.10</td>
<td>70.44</td>
<td>76.15</td>
<td>68.19</td>
</tr>
<tr>
<td>VeCap 300M</td>
<td>64.74</td>
<td>44.58</td>
<td>90.10</td>
<td>73.14</td>
<td>46.43</td>
<td>41.15</td>
</tr>
<tr>
<td>DFN + VeCap 300M</td>
<td>66.28</td>
<td>45.12</td>
<td>88.80</td>
<td>73.56</td>
<td>76.19</td>
<td>69.58</td>
</tr>
<tr>
<td rowspan="1"><a href="https://docs-assets.developer.apple.com/ml-research/models/veclip/vecapdfn_clip_l14.zip">VeCap-DFN-L/14</a></td>
<td rowspan="1">224x224</td>
<td>DFN + VeCap 300M</td>
<td>71.06</td>
<td>51.13</td>
<td>93.10</td>
<td>80.96</td>
<td>81.95</td>
<td>75.48</td>
</tr>
<tr>
<td rowspan="2"><a href="https://docs-assets.developer.apple.com/ml-research/models/veclip/vecapdfn_clip_h14_336.zip">VeCap-DFN-H/14</a></td>
<td rowspan="1">336x336</td>
<td>DFN + VeCap 300M</td>
<td>72.78</td>
<td>52.33</td>
<td>93.60</td>
<td>82.64</td>
<td>83.07</td>
<td>76.37</td>
</tr>
</tbody>
</table>


## Citation

If you find VeCLIP useful, please cite using this BibTeX:

```bibtex
@article{lai2023scarcity,
  title={From scarcity to efficiency: Improving clip training via visual-enriched captions},
  author={Lai, Zhengfeng and Zhang, Haotian and Zhang, Bowen and Wu, Wentao and Bai, Haoping and Timofeev, Aleksei and Du, Xianzhi and Gan, Zhe and Shan, Jiulong and Chuah, Chen-Nee and Yang, Yinfei and others},
  journal={arXiv preprint arXiv:2310.07699},
  year={2023}
}
@article{fang2023data,
  title={Data filtering networks},
  author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
  journal={arXiv preprint arXiv:2309.17425},
  year={2023}
}
```

## Acknowledgement

- [axlearn](https://github.com/apple/axlearn): the codebase we use to train the models. 
- [Hugging Face Transformers](https://huggingface.co/docs/transformers/en/index): provides the APIs we use to load our trained models.