---
license: mit
---


# MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning

Paper: https://arxiv.org/abs/2112.05253

## Abstract

Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA, a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end using a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state-of-the-art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2% of the number of samples used to train SimVLM.
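
In practice, "adapter-based finetuning" means inserting small trainable bottleneck modules into the otherwise frozen language model. The sketch below illustrates the idea in PyTorch; the class and argument names are illustrative only and are not taken from the MAGMA codebase.

```py
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, hidden_dim: int, downsample_factor: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // downsample_factor),
            nn.ReLU(),
            nn.Linear(hidden_dim // downsample_factor, hidden_dim),
        )

    def forward(self, x):
        # The residual connection preserves the frozen transformer's original computation path
        return x + self.net(x)

# During finetuning only the adapters (and the visual prefix) receive gradients;
# the pretrained language model weights stay frozen, e.g.:
# for p in language_model.parameters():
#     p.requires_grad = False
```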

## Usage

```py
from magma import Magma
from huggingface_hub import hf_hub_url, cached_download

# Download the checkpoint from the Hugging Face Hub
checkpoint_url = hf_hub_url(repo_id="osanseviero/magma", filename="model.pt")
checkpoint_path = cached_download(checkpoint_url)

# Load the model from its config and the downloaded checkpoint
model = Magma.from_checkpoint(
    config_path="configs/MAGMA_v1.yml",
    checkpoint_path=checkpoint_path,
    device="cuda:0",
)
```
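
Once loaded, MAGMA generates text autoregressively from arbitrary combinations of images and text. The snippet below is a sketch of that workflow, assuming the `ImageInput` helper and the `preprocess_inputs`/`generate` methods exposed by the `magma` package; the image URL is only a placeholder.

```py
from magma.image_input import ImageInput

# Interleaved image + text prompt; ImageInput accepts URLs or local paths
inputs = [
    ImageInput("https://example.com/painting.jpg"),  # placeholder image
    "Describe the painting:",
]

# Embed the multimodal prompt, then generate a completion
embeddings = model.preprocess_inputs(inputs)
output = model.generate(
    embeddings=embeddings,
    max_steps=6,
    temperature=0.7,
    top_k=0,
)
print(output[0])
```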