luodian committed
Commit ca95382
1 Parent(s): f94072d

Update README.md

Files changed (1)
  1. README.md +30 -2
README.md CHANGED
@@ -25,6 +25,34 @@ license: mit
  <sup>2</sup>Microsoft Research, Redmond
  </div>
 
- This weight is for **initializing training for Otter**. It's directly converted from OpenFlamingo; we added tokens for downstream instruction tuning.
 
- It will be renamed to **OTTER-Image-LLaMA7B-Init** starting Aug. 1st.
+ This weight is for **initializing training for Otter**. It's directly converted from OpenFlamingo.

+ You can load and try this model using:
+ ```python
+ import transformers
+ # OtterForConditionalGeneration is provided by the Otter codebase; with the repo root
+ # on PYTHONPATH the import path is assumed to be:
+ from otter.modeling_otter import OtterForConditionalGeneration
+
+ # Load the converted OpenFlamingo weights as an Otter initialization checkpoint.
+ model = OtterForConditionalGeneration.from_pretrained("luodian/OTTER-LLaMA7B-Init", device_map="sequential")
+ model.text_tokenizer.padding_side = "left"  # left-pad so generation continues from the prompt
+ tokenizer = model.text_tokenizer
+ image_processor = transformers.CLIPImageProcessor()
+ model.eval()
+ ```
+
+ You can also start training Otter with the following command:
+ ```bash
+ python -m accelerate.commands.launch --config_file=./pipeline/accelerate_configs/accelerate_config_fsdp.yaml \
+ pipeline/train/instruction_following.py \
+ --pretrained_model_name_or_path=luodian/OTTER-LLaMA7B-Init \
+ --mimicit_path=/data/azure_storage/otter/mimicit/xx/xx_instructions.json \
+ --images_path=/data/azure_storage/otter/mimicit/xx/xx.json \
+ --batch_size=4 --num_epochs=1 --report_to_wandb \
+ --wandb_entity=ntu-slab \
+ --external_save_dir=/data/bli/checkpoints \
+ --save_hf_model \
+ --run_name=OTTER-MPT1B \
+ --wandb_project=OTTER-MPT1B \
+ --workers=4 \
+ --lr_scheduler=cosine \
+ --learning_rate=1e-5 \
+ --warmup_steps_ratio=0.01
+ ```
+
+ If you wish to initialize video instruction tuning, add `"max_num_frames": 128` to the `config.json` inside the checkpoint folder.
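
Below is a minimal sketch of that `config.json` edit, assuming you have already downloaded the checkpoint folder (the path is a placeholder):

```python
import json
from pathlib import Path

# Placeholder path: point this at your local copy of luodian/OTTER-LLaMA7B-Init.
config_path = Path("/path/to/OTTER-LLaMA7B-Init/config.json")

config = json.loads(config_path.read_text())
config["max_num_frames"] = 128  # enable multi-frame (video) instruction tuning
config_path.write_text(json.dumps(config, indent=2))
```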
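
For completeness, here is a minimal inference sketch that builds on the loading snippet in the diff above. It assumes the OpenFlamingo-style interface that Otter inherits (a `vision_x` tensor shaped `(batch, num_media, num_frames, C, H, W)` plus tokenized `lang_x`) and the `<image>User: ... GPT:<answer>` prompt format used for instruction tuning; treat the exact argument names and the prompt as assumptions rather than a documented API.

```python
from PIL import Image

# Assumes `model`, `tokenizer`, and `image_processor` were created as in the loading snippet.
image = Image.open("demo_image.jpg").convert("RGB")  # placeholder path to any image

# Shape (batch, num_media, num_frames, C, H, W): one image, one frame.
vision_x = image_processor(images=[image], return_tensors="pt")["pixel_values"].unsqueeze(1).unsqueeze(0)
lang_x = tokenizer(["<image>User: what does the image describe? GPT:<answer>"], return_tensors="pt")

generated = model.generate(  # assumed OpenFlamingo-style generate() signature
    vision_x=vision_x.to(model.device),
    lang_x=lang_x["input_ids"].to(model.device),
    attention_mask=lang_x["attention_mask"].to(model.device),
    max_new_tokens=256,
    num_beams=3,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```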