VictorSanh committed on
Commit
dcb5a04
•
1 Parent(s): 13bf038
Files changed (1)
  1. README.md +42 -4
README.md CHANGED
@@ -9,6 +9,15 @@ datasets:
9
  - pixparse/pdfa-eng-wds
10
  - wendlerc/RenderedText
11
  - HuggingFaceM4/the_cauldron
12
  language:
13
  - en
14
  tags:
@@ -52,7 +61,7 @@ For optimal results, we recommend fine-tuning `idefics2-8b` on one's specific us
52
 
53
  As a starting point, we provide fine-tuning code that can be adapted to one's particular scenario:
54
  - With the [TRL library](https://github.com/huggingface/trl): TODO
55
- - With the [Hugging Face Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer): TODO
56
 
57
 
58
  # Technical summary
@@ -61,7 +70,21 @@ IDEFICS-2 exhibits strong performance for a model of its size (8B parameters) wh
61
 
62
  <details><summary>For more details, expand the result table.</summary>
63
 
64
- TODO: performance table
65
 
66
  </details>
67
 
@@ -71,6 +94,21 @@ TODO: performance table
71
  - We departed from IDEFICS-1's architecture (gated cross-attentions) and **simplified the integration of visual features** into the language backbone. The images are fed to the vision encoder, followed by a learned [Perceiver](https://arxiv.org/abs/2103.03206) pooling and an MLP modality projection. That pooled sequence is then concatenated with the text embeddings to obtain an (interleaved) sequence of image(s) and text(s).
72
  - All of these improvements along with better pre-trained backbones yield a significant jump in performance over IDEFICS-1 for a model that is **10x smaller**.
73
 
74
  More details (training procedure, data selection, hyper-parameters, etc.) along with lessons learned from our ablations will be available in an upcoming technical report.
75
 
76
 
@@ -199,7 +237,7 @@ Flash attention 2 support is available both for `idefics2-8b-base` and `idefics2
199
 
200
  </details>
201
 
202
- **4-bit quantization and module fusing**
203
 
204
  <details><summary>Click to expand.</summary>
205
 
@@ -228,7 +266,7 @@ model = AutoModelForVision2Seq.from_pretrained(
228
  ).to(DEVICE)
229
  ```
230
 
231
- </details>
232
 
233
  # Bias, Risks, and Limitations
234
 
 
9
  - pixparse/pdfa-eng-wds
10
  - wendlerc/RenderedText
11
  - HuggingFaceM4/the_cauldron
12
+ - teknium/OpenHermes-2.5
13
+ - GAIR/lima
14
+ - databricks/databricks-dolly-15k
15
+ - meta-math/MetaMathQA
16
+ - TIGER-Lab/MathInstruct
17
+ - microsoft/orca-math-word-problems-200k
18
+ - camel-ai/math
19
+ - AtlasUnified/atlas-math-sets
20
+ - tiedong/goat
21
  language:
22
  - en
23
  tags:
 
61
 
62
  As a starting point, we provide fine-tuning code that can be adapted to one's particular scenario:
63
  - With the [TRL library](https://github.com/huggingface/trl): TODO
64
+ - With the [Hugging Face Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer): [Tutorial notebook](https://colab.research.google.com/drive/1rm3AGquGEYXfeeizE40bbDtcWh5S4Nlq?usp=sharing)
65
 
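As a complement to the resources above, here is a minimal, hedged sketch of how a single supervised example could be prepared with the processor's chat template before being handed to TRL or the Trainer. The image URL, the conversation, and the naive label handling are placeholders for illustration, not the exact recipe used in the tutorial notebook.

```python
# Illustrative only: prepares one (image, conversation) pair as a training example.
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")

image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)  # placeholder URL
messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What does the chart show?"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "Revenue increasing over 2023."}]},
]

# No generation prompt here: the assistant turn is part of the training target.
prompt = processor.apply_chat_template(messages, add_generation_prompt=False)
batch = processor(text=prompt, images=[image], return_tensors="pt")
batch["labels"] = batch["input_ids"].clone()  # naive labels; a real collator would mask padding and user turns
```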
66
 
67
  # Technical summary
 
70
 
71
  <details><summary>For more details, expand the result table.</summary>
72
 
73
+ | Model | Open weights | Size | # tokens per image | MMMU (val/test) | MathVista (testmini) | TextVQA (val) | MMBench (test) | VQAv2 (test-dev) | DocVQA (test) |
74
+ |--------------|-------------|------|--------------------|-----------|-----------|---------|---------|---------|---------|
75
+ | DeepSeek-VL | ✅ | 7B | 576 | 36.6/- | 36.1 | - | 73.2 | - | - |
76
+ | LLaVa-NeXT-13B | ✅ | 13B | 2880 | 36.2/- | 35.3 | 67.1 | 70.0 | 82.8 | - |
77
+ | LLaVa-NeXT-34B | ✅ | 34B | 2880 | 51.1/44.7 | 46.5 | 69.5 | 79.3 | 83.7 | - |
78
+ | MM1-Chat-7B | ❌ | 7B | 720 | 37.0/35.6 | 35.9 | 72.8 | 72.3 | - | - |
79
+ | MM1-Chat-30B | ❌ | 30B | 720 | 44.7/40.3 | 39.4 | 73.5 | 75.1 | 83.7 | - |
80
+ | Gemini 1.0 Pro | ❌ | ? | ? | 47.9/- | 45.2 | 74.6 | - | 71.2 | 88.1 |
81
+ | Gemini 1.5 Pro | ❌ | ? | ? | 58.5/- | 52.1 | 73.5 | - | 73.2 | 86.5 |
82
+ | Claude 3 Haiku | ❌ | ? | ? | 50.2/- | 46.4 | - | - | - | 88.8 |
83
+ | | | | | | | | | | |
84
+ | IDEFICS-1 instruct (32-shots) | ✅ | 80B | - | - | - | 39.3 | - | 68.8 | - |
85
+ | | | | | | | | | | |
86
+ | IDEFICS-2 (w/o image splitting) | ✅ | 8B | 64 | 43.5/37.9 | 51.6 | 70.4 | 76.8 | 80.8 | 67.3 |
87
+ | IDEFICS-2 (w/ image splitting) | ✅ | 8B | 320 | 43.0/37.7 | 51.4 | 73.0 | 76.7 | 81.2 | 74.0 |
88
 
89
  </details>
90
 
 
94
  - We departed from IDEFICS-1's architecture (gated cross-attentions) and **simplified the integration of visual features** into the language backbone. The images are fed to the vision encoder, followed by a learned [Perceiver](https://arxiv.org/abs/2103.03206) pooling and an MLP modality projection. That pooled sequence is then concatenated with the text embeddings to obtain an (interleaved) sequence of image(s) and text(s).
95
  - All of these improvements along with better pre-trained backbones yield a significant jump in performance over IDEFICS-1 for a model that is **10x smaller**.
96
 
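A schematic of the connector described above, as a self-contained PyTorch sketch. The module names, hidden sizes, and number of latent queries are assumptions chosen for illustration, not the values of the released checkpoints.

```python
# Schematic only: vision features -> learned Perceiver-style pooling -> MLP projection -> concat with text.
import torch
import torch.nn as nn

VISION_DIM, TEXT_DIM, N_LATENTS = 1152, 4096, 64  # assumed sizes for the sketch

class PerceiverPooling(nn.Module):
    """Pools a variable-length sequence of image features into a fixed number of latent tokens."""
    def __init__(self):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(N_LATENTS, VISION_DIM))
        self.cross_attn = nn.MultiheadAttention(VISION_DIM, num_heads=8, batch_first=True)

    def forward(self, image_features):                          # (batch, num_patches, VISION_DIM)
        queries = self.latents.expand(image_features.size(0), -1, -1)
        pooled, _ = self.cross_attn(queries, image_features, image_features)
        return pooled                                            # (batch, N_LATENTS, VISION_DIM)

pooling = PerceiverPooling()
modality_projection = nn.Sequential(                             # MLP mapping vision width to text width
    nn.Linear(VISION_DIM, TEXT_DIM), nn.GELU(), nn.Linear(TEXT_DIM, TEXT_DIM)
)

image_features = torch.randn(1, 729, VISION_DIM)                 # stand-in for the vision encoder output
text_embeddings = torch.randn(1, 32, TEXT_DIM)                   # stand-in for embedded text tokens
image_tokens = modality_projection(pooling(image_features))      # (1, 64, TEXT_DIM)
sequence = torch.cat([image_tokens, text_embeddings], dim=1)     # interleaved sequence fed to the language backbone
```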
97
+ IDEFICS-2 is trained in two stages for maximum efficiency. In the first stage, images are fed to the model at SigLIP's native resolution (squares of 384 x 384 pixels). In the second stage, images are fed to the model at their native resolution (longest edge at most 980 pixels, shortest edge at least 378 pixels) and native aspect ratio. Since high resolution is necessary for OCR data, we add PDFA, Rendered-Text, and IDL to OBELICS, LAION COCO, and PMD during that second stage.
98
+
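The resolutions above can be controlled at the processor level. A minimal sketch, assuming the released processor exposes `do_image_splitting` and an image-processor `size` dictionary with `longest_edge`/`shortest_edge` keys:

```python
from transformers import AutoProcessor

# Disable the optional image-splitting path and cap resolution to the second-stage bounds.
processor = AutoProcessor.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    do_image_splitting=False,
)
processor.image_processor.size = {"longest_edge": 980, "shortest_edge": 378}
```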
99
+ Following this, we perform instruction fine-tuning on [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron), a collection of 50 manually curated vision-language datasets along with 9 text-only instruction fine-tuning datasets:
100
+ - [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
101
+ - [lima](https://huggingface.co/datasets/GAIR/lima)
102
+ - [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
103
+ - [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
104
+ - [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
105
+ - [orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)
106
+ - [math](https://huggingface.co/datasets/camel-ai/math)
107
+ - [atlas-math-sets](https://huggingface.co/datasets/AtlasUnified/atlas-math-sets)
108
+ - [goat](https://huggingface.co/datasets/tiedong/goat)
109
+
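For reference, subsets of The Cauldron can be pulled with the `datasets` library. The subset name `ai2d` and the field names below are assumptions for illustration and should be checked against the dataset card.

```python
from datasets import load_dataset

# Load one vision-language subset of The Cauldron (subset name assumed for illustration).
cauldron_subset = load_dataset("HuggingFaceM4/the_cauldron", "ai2d", split="train")
example = cauldron_subset[0]
print(example["images"])  # associated image(s)
print(example["texts"])   # user/assistant turns used for instruction fine-tuning
```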
110
+ We use LoRA for the parameters initialized from the pre-trained backbones and full fine-tuning for the newly initialized parameters (modality connector), as we find this strategy to be both more stable and more computationally efficient.
111
+
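A hedged sketch of that split with the PEFT library: LoRA adapters on the pre-trained backbone weights, while newly initialized connector modules stay fully trainable via `modules_to_save`. The rank, target module names, and connector module names are assumptions, not the exact configuration used for the released checkpoints.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForVision2Seq

model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b-base")

lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],         # assumed attention projections in the backbones
    modules_to_save=["modality_projection", "perceiver_resampler"],  # assumed names for the newly initialized connector
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # LoRA adapters + connector trainable; the rest of the backbones stay frozen
```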
112
  More details (training procedure, data selection, hyper-parameters, etc.) along with lessons learned from our ablations will be available in an upcoming technical report.
113
 
114
 
 
237
 
238
  </details>
239
 
240
+ <!-- **4-bit quantization and module fusing**
241
 
242
  <details><summary>Click to expand.</summary>
243
 
 
266
  ).to(DEVICE)
267
  ```
268
 
269
+ </details> -->
270
 
271
  # Bias, Risks, and Limitations
272