nielsr (HF staff) committed
Commit
7d5121d
1 parent: ef67d78

Use hf_hub_download

Files changed (1)
  1. README.md +26 -27
README.md CHANGED
@@ -1,28 +1,27 @@
- ---
- license: mit
- ---
-
- # MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning
-
- Paper: https://arxiv.org/abs/2112.05253
-
- ## Abstract
-
- Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end using a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state of the art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2% of the number of samples used to train SimVLM.
-
- ## Usage
-
- ```py
- from magma import Magma
-
- from huggingface_hub import hf_hub_url, cached_download
-
- checkpoint_url = hf_hub_url(repo_id="osanseviero/magma", filename="model.pt")
- checkpoint_path = cached_download(checkpoint_url)
-
- model = Magma.from_checkpoint(
-     config_path = "configs/MAGMA_v1.yml",
-     checkpoint_path = checkpoint_path,
-     device = 'cuda:0'
- )
+ ---
+ license: mit
+ ---
+
+ # MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning
+
+ Paper: https://arxiv.org/abs/2112.05253
+
+ ## Abstract
+
+ Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end using a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state of the art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2% of the number of samples used to train SimVLM.
+
+ ## Usage
+
+ ```py
+ from magma import Magma
+
+ from huggingface_hub import hf_hub_download
+
+ checkpoint_path = hf_hub_download(repo_id="osanseviero/magma", filename="model.pt")
+
+ model = Magma.from_checkpoint(
+     config_path = "configs/MAGMA_v1.yml",
+     checkpoint_path = checkpoint_path,
+     device = 'cuda:0'
+ )
  ```
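
For context, this commit swaps the older two-step download pattern (`hf_hub_url` to build the file URL, then `cached_download` to fetch it) for the single `hf_hub_download` call, which resolves, downloads, and caches the file in one step. A minimal before/after sketch, assuming `huggingface_hub` is installed and the `osanseviero/magma` repo is reachable:

```py
from huggingface_hub import hf_hub_download

# Before: build the raw file URL, then download it into the local cache.
# from huggingface_hub import hf_hub_url, cached_download
# checkpoint_url = hf_hub_url(repo_id="osanseviero/magma", filename="model.pt")
# checkpoint_path = cached_download(checkpoint_url)

# After: one call that resolves, downloads, and caches the file,
# returning the local path to the cached checkpoint.
checkpoint_path = hf_hub_download(repo_id="osanseviero/magma", filename="model.pt")
print(checkpoint_path)  # a path inside the local Hugging Face cache
```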