mshukor committed on
Commit
c3325f7
•
1 Parent(s): 3eb682b
Files changed (1)
  1. README.md +11 -151
README.md CHANGED
@@ -1,151 +1,11 @@
- # eP-ALM: Efficient Perceptual Augmentation of Language Models
-
- <p align="center">
- <img src="images/teaser.jpg" width="500"/>
- </p>
-
- Official implementation of the paper:
- - [eP-ALM: Efficient Perceptual Augmentation of Language Models](https://arxiv.org/abs/2303.11403)
-
- In this repo, you will find the pretrained models and code to train and evaluate eP-ALM on Image/Video/Audio-Text tasks.
-
- ## News
-
- * **[June-2023]** A new version of the paper is released on arXiv:
-     * We re-evaluate the models with greedy decoding.
-     * We add a comparison with SoTA.
-     * We add new experiments, including pretraining on CC3M and zero-shot evaluation (see the Appendix).
- * **[May-2023]** The code is optimized to train and evaluate with float16 mixed precision, using the accelerate library 🤗.
- * **[May-2023]** We found that greedy decoding with beam search is significantly better than multinomial/random sampling.
- * **[20-March-2023]** The paper is released on arXiv.
- * **[March-2023]** The paper is submitted for publication and is currently under review.
-
- ## Summary
-
- * [Introduction](#introduction)
- * [Download](#download)
- * [Installation](#installation)
- * [Evaluation](#evaluation)
- * [Accelerated Training 🤗](#accelerated-training)
- * [Training](#training)
- * [Citation](#citation)
- * [Acknowledgment](#acknowledgment)
-
-
- ## Introduction
-
-
-
- Large Language Models (LLMs) have so far impressed the world, with unprecedented capabilities that emerge in models at large scale. On the vision side, transformer models (e.g., ViT) are following the same trend, achieving the best performance on challenging benchmarks. With the abundance of such unimodal models, a natural question arises: do we also need to follow this trend to tackle multimodal tasks? In this work, we instead direct effort toward efficient adaptation of existing models, and propose to augment Language Models with perception. Existing approaches for adapting pretrained models to vision-language tasks still rely on several key components that hinder their efficiency. In particular, they still train a large number of parameters, rely on large multimodal pretraining, use encoders (e.g., CLIP) trained on huge image-text datasets, and add significant inference overhead. In addition, most of these approaches focus on Zero-Shot and In-Context Learning, with little to no effort on direct finetuning. We investigate the minimal computational effort needed to adapt unimodal models to multimodal tasks and propose a new challenging setup, alongside different approaches, that efficiently adapts unimodal pretrained models. We show that by freezing more than 99% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning across Image, Video, and Audio modalities, following the proposed setup.
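-
- To make the paragraph above concrete, here is a toy sketch of the core idea, training only a single linear projection and a single soft token on top of frozen unimodal models. It is not the repository's actual implementation; the class name, dimensions, and shapes are illustrative assumptions:
-
- ```
- import torch
- import torch.nn as nn
-
- class PerceptualPrompt(nn.Module):
-     """Toy sketch: one trainable linear projection plus one trainable soft token."""
-     def __init__(self, vis_dim=1024, lm_dim=2560, prompt_len=1):
-         super().__init__()
-         self.proj = nn.Linear(vis_dim, lm_dim)                              # the only trained projection
-         self.soft_prompt = nn.Parameter(torch.zeros(prompt_len, lm_dim))    # the only trainable token
-
-     def forward(self, vis_feats, text_embeds):
-         # vis_feats: (B, vis_dim), e.g. a frozen ViT feature; text_embeds: (B, T, lm_dim)
-         vis_tok = self.proj(vis_feats).unsqueeze(1)                         # (B, 1, lm_dim)
-         prompt = self.soft_prompt.expand(text_embeds.size(0), -1, -1)       # (B, prompt_len, lm_dim)
-         return torch.cat([prompt, vis_tok, text_embeds], dim=1)             # fed to the frozen LM
-
- x = PerceptualPrompt()(torch.randn(2, 1024), torch.randn(2, 7, 2560))
- print(x.shape)  # torch.Size([2, 9, 2560])
- ```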
-
-
- <p align="center">
- <img src="images/variants.jpg" width="500"/>
- </p>
-
-
- ### Results
-
- > Comparison of eP-ALM with text-generation-based SoTA that train a significant number of parameters, including methods with large-scale pretraining. Best and next-best scores are bolded and underlined, respectively. FT: Finetuning. ZS: Zero-shot.
-
- <p align="center">
- <img src="images/epalm_sota.png" width="700"/>
- </p>
-
-
-
- > Qualitative results of eP-ALM: the model is able to generate accurate answers and coherent descriptions of the image. Ground-truth answers are highlighted in green (with multinomial sampling and OPT-350M).
-
- <p align="center">
- <img src="images/qual.jpg" width="1000"/>
- </p>
-
-
-
- ## Download
-
- ### OPT Model
- First, you need to download the OPT models and tokenizers. You can use the following (for OPT-2.7B) to download them automatically:
-
- ```
- from transformers import AutoTokenizer, OPTModel
- tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
- model = OPTModel.from_pretrained("facebook/opt-2.7b")
- ```
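-
- As an optional sanity check that the weights downloaded correctly, you can run a quick forward pass (illustrative only); note that `OPTModel` returns hidden states rather than logits:
-
- ```
- import torch
- from transformers import AutoTokenizer, OPTModel
-
- tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
- model = OPTModel.from_pretrained("facebook/opt-2.7b")
-
- inputs = tokenizer("a photo of a cat", return_tensors="pt")
- with torch.no_grad():
-     hidden = model(**inputs).last_hidden_state
- print(hidden.shape)  # (1, sequence_length, 2560) for OPT-2.7B
- ```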
-
- ### Pretrained Models
- We provide only the adaptation parameters (linear connection and Soft Prompt). You can download the following models:
-
- For eP-ALM_pt-L (with OPT-6.7B and ViT-L), trained with float16 mixed precision (`accelerate_training`):
- * VQA v2: [ePALM_pt_L](https://nuage.isir.upmc.fr/index.php/s/aSrCTKXsKQxAE72)
- * COCO Caption: [ePALM_pt_L](https://nuage.isir.upmc.fr/index.php/s/PNrytqFbJWJqdt3)
- * GQA: [ePALM_pt_L](https://nuage.isir.upmc.fr/index.php/s/9o7gk5gWY5ZNKLM)
-
- In the following, we provide the smaller models that were used to obtain the main results in the paper. Note that these models are trained with float32:
- * VQA v2: [ePALM](https://nuage.isir.upmc.fr/index.php/s/SMTJqfL62KC88z5)
- * COCO Caption: [ePALM](https://nuage.isir.upmc.fr/index.php/s/y9KZr9CEpe42443)
- * GQA: [ePALM](https://nuage.isir.upmc.fr/index.php/s/8rS84b4EH56CPZq)
- * MSR-VTT Video Caption: [ePALM](https://nuage.isir.upmc.fr/index.php/s/nCj7mz7NHgeYokP)
- * MSRVTT-QA Video QA: [ePALM](https://nuage.isir.upmc.fr/index.php/s/RysMQzH9sSf5b7P)
- * MSVD-QA: [ePALM](https://nuage.isir.upmc.fr/index.php/s/LCdLN3xg35jGCP2)
- * AudioCaps Audio Captioning: [ePALM](https://nuage.isir.upmc.fr/index.php/s/ZeeZc9zdFSgFTFC)
-
- ### Data
- More details on the download and organization of the datasets can be found [here](docs/datasets.md).
-
- ## Installation
- Main requirements:
- ```
- python >= 3.8
- torch >= 1.12
- transformers >= 4.24
- accelerate >= 0.11.0
- ```
- More details can be found [here](docs/installation.md).
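-
- For a quick start, the main requirements above can be installed with pip as sketched below (a sketch only; the CUDA-matched `torch` wheel may need to be installed separately, as described in the installation docs):
-
- ```
- pip install "torch>=1.12" "transformers>=4.24" "accelerate>=0.11.0"
- ```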
-
-
- ## Evaluation
- To evaluate the trained models, you can use the same scripts in `run_scripts/` and just pass the best checkpoint path to the `--evaluate` argument, as sketched below.
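-
- For example, to evaluate a VQA checkpoint with the corresponding accelerate script (a sketch: the checkpoint path is a placeholder, and it assumes the script forwards extra arguments to the underlying training command; otherwise, set `--evaluate` inside the script):
-
- ```
- sh run_scripts/accelerate/image/ePALM_pt_L_vqa_acc.sh --evaluate /path/to/checkpoint_best.pth
- ```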
-
- To visualize the results and test on your own images, you can use the notebook `ePALM.ipynb`.
-
- You should use the same script that was used for training to evaluate the model (e.g., `run_scripts/accelerate_training` for models trained with accelerate).
-
- Note that you can evaluate the models trained with float32 with `run_scripts/accelerate_training`, but then you might obtain slightly different results (e.g., for captioning we obtain 102 CIDEr instead of the 97 reported in the paper).
-
-
- ## Accelerated Training 🤗
- We optimized the code based on the [accelerate](https://github.com/huggingface/accelerate) library. Mainly, we train with mixed precision and keep the LM in float16; this roughly halves memory consumption and doubles training speed.
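-
- Conceptually, the mixed-precision setup corresponds to the minimal `accelerate` sketch below (illustrative only, with a toy model and dataset standing in for the real eP-ALM training objects; fp16 requires a GPU):
-
- ```
- import torch
- from accelerate import Accelerator
- from torch.utils.data import DataLoader, TensorDataset
-
- # Toy stand-ins for the real model and data
- model = torch.nn.Linear(16, 1)
- optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
- loader = DataLoader(TensorDataset(torch.randn(8, 16), torch.randn(8, 1)), batch_size=4)
-
- accelerator = Accelerator(mixed_precision="fp16")  # forward/backward in float16
- model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
-
- for x, y in loader:
-     loss = torch.nn.functional.mse_loss(model(x), y)
-     accelerator.backward(loss)  # handles gradient scaling for fp16
-     optimizer.step()
-     optimizer.zero_grad()
- ```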
-
- For example, after specifying the paths to the `config` file, `data_dir`, and `output_dir`,
- you can launch a training run of eP-ALM_pt-L on VQA v2 with:
-
- ```
- sh run_scripts/accelerate/image/ePALM_pt_L_vqa_acc.sh
- ```
-
- To resume training, pass the checkpoint to initialize from via the `--resume` argument.
-
-
- ## Training
- The previous models are trained with float32 precision. You can launch training/evaluation of eP-ALM using the scripts in `run_scripts/float32`. For example, you can launch training on VQA v2 with the following script:
-
- ```
- sh run_scripts/float32/image/ePALM_vqa.sh
- ```
-
- ## Citation
-
- ```
- @article{shukor2023ep,
-   title={eP-ALM: Efficient Perceptual Augmentation of Language Models},
-   author={Shukor, Mustafa and Dancette, Corentin and Cord, Matthieu},
-   journal={arXiv preprint arXiv:2303.11403},
-   year={2023}
- }
- ```
- ## Acknowledgment
-
- Some code was borrowed from [timm](https://github.com/rwightman/pytorch-image-models), [transformers](https://github.com/huggingface/transformers), [TimeSformer](https://github.com/facebookresearch/TimeSformer), and [VL-Adapter](https://github.com/ylsung/VL_adapter).
-
-
 
+ ---
+ title: eP-ALM
+ emoji: 🌍
+ colorFrom: purple
+ colorTo: pink
+ sdk: gradio
+ sdk_version: 3.12.0
+ app_file: app.py
+ pinned: true
+ license: apache-2.0
+ ---