Leyo committed on
Commit
258ef74
1 Parent(s): 5edd82e

small fixes

Files changed (1):
README.md (+2 -2)
README.md CHANGED
@@ -207,7 +207,7 @@ We start from the base IDEFICS models and fine-tune the models by unfreezing all
 
 We note that all these datasets were obtained by using ChatGPT/GPT-4 in one way or another.
 
-Additionally, we found it beneficial to include the pre-training data in the fine-tuning with the following sampling ratios: 5.1% of image-text pairs and 31.0 of multimodal web documents.
+Additionally, we found it beneficial to include the pre-training data in the fine-tuning with the following sampling ratios: 5.1% of image-text pairs and 30.7% of OBELICS multimodal web documents.
 
 The training objective is the standard next token prediction. We use the following hyper and training parameters:
 | Parameters | | IDEFICS-80b-instruct | IDEFICS-9b-instruct |
@@ -229,7 +229,7 @@ The training objective is the standard next token prediction. We use the followi
 
 # Evaluation
 
-## IDEFICS base
+## IDEFICS
 
 We follow the evaluation protocol of Flamingo and evaluate IDEFICS on a suite of downstream image-text benchmarks ranging from visual question answering to image captioning.
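The corrected line in this diff specifies the ratios at which pre-training data is mixed back into the instruction fine-tuning mixture (5.1% image-text pairs, 30.7% OBELICS multimodal web documents). As an illustrative sketch only, not the authors' training code, the snippet below shows how such a fixed-ratio mixture could be assembled with `datasets.interleave_datasets`; the instruction-data and image-text-pair dataset names, the 64.2% share assigned to instruction data, and the seed are assumptions.

```python
from datasets import load_dataset, interleave_datasets

# Hypothetical handles for the three data sources; only OBELICS is a real
# Hub dataset id here, the other two names are placeholders.
instruct_data = load_dataset("HuggingFaceM4/some-instruct-mixture", split="train", streaming=True)
image_text_pairs = load_dataset("HuggingFaceM4/some-image-text-pairs", split="train", streaming=True)
obelics_docs = load_dataset("HuggingFaceM4/OBELICS", split="train", streaming=True)

# Sampling ratios from the corrected README line: 5.1% image-text pairs,
# 30.7% OBELICS web documents; the remainder (assumed 64.2%) is instruction data.
# This only illustrates the mixing ratios, not any preprocessing or formatting.
mixture = interleave_datasets(
    [instruct_data, image_text_pairs, obelics_docs],
    probabilities=[0.642, 0.051, 0.307],
    seed=42,
    stopping_strategy="all_exhausted",
)
```

With `stopping_strategy="all_exhausted"`, exhausted sources are re-iterated until every dataset has been fully seen, so the empirical proportions over a long training run stay close to the requested probabilities.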