Safetensors · English · qwen2_vl · biology · medical · chemistry
AdaptLLM committed
Commit 0b0bd76
1 Parent(s): 5d52c2c

Update README.md

Files changed (1):
  1. README.md +5 -6
README.md CHANGED
```diff
@@ -42,8 +42,7 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 **Code**: [https://github.com/bigai-ai/QA-Synthesizer](https://github.com/bigai-ai/QA-Synthesizer)
 
 ## 1. To Chat with AdaMLLM
-
-Our model architecture aligns with the base model: Qwen-2-VL-Instruct. Below, we provide a usage example. For more advanced usage instructions, please refer to the official [Qwen-2-VL-Instruct repository](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct/edit/main/README.md).
+Our model architecture aligns with the base model: Qwen-2-VL-Instruct. We provide a usage example below, and you may refer to the official [Qwen-2-VL-Instruct repository](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct/edit/main/README.md) for more advanced usage instructions.
 
 **Note:** For AdaMLLM, always place the image at the beginning of the input instruction in the messages.
 
@@ -61,24 +60,24 @@ from qwen_vl_utils import process_vision_info
 
 # default: Load the model on the available device(s)
 model = Qwen2VLForConditionalGeneration.from_pretrained(
-    "AdaptLLM/medicine-Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
+    "AdaptLLM/biomed-Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
 )
 
 # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
 # model = Qwen2VLForConditionalGeneration.from_pretrained(
-#     "AdaptLLM/medicine-Qwen2-VL-2B-Instruct",
+#     "AdaptLLM/biomed-Qwen2-VL-2B-Instruct",
 #     torch_dtype=torch.bfloat16,
 #     attn_implementation="flash_attention_2",
 #     device_map="auto",
 # )
 
 # default processer
-processor = AutoProcessor.from_pretrained("AdaptLLM/medicine-Qwen2-VL-2B-Instruct")
+processor = AutoProcessor.from_pretrained("AdaptLLM/biomed-Qwen2-VL-2B-Instruct")
 
 # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
 # min_pixels = 256*28*28
 # max_pixels = 1280*28*28
-# processor = AutoProcessor.from_pretrained("AdaptLLM/medicine-Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
+# processor = AutoProcessor.from_pretrained("AdaptLLM/biomed-Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
 
 
 # NOTE: For AdaMLLM, always place the image at the beginning of the input instruction in the messages.
```
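For context on the note above, here is a minimal, self-contained sketch of the chat workflow, assuming the standard Qwen2-VL message format and generation loop from the upstream Qwen2-VL-Instruct README. The image path and question are placeholders, and the key point is that the image entry comes first in the user message's `content` list, followed by the text instruction.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model and processor as in the diff above.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "AdaptLLM/biomed-Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("AdaptLLM/biomed-Qwen2-VL-2B-Instruct")

# Image first, then the text instruction (placeholder path and question).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image.png"},
            {"type": "text", "text": "Describe the key findings in this image."},
        ],
    }
]

# Standard Qwen2-VL preprocessing: chat template + vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly generated tokens.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
))
```

If memory or speed is a concern, the commented `min_pixels`/`max_pixels` lines in the diff can be passed to `AutoProcessor.from_pretrained` in the same way to bound the per-image visual token count.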