RaushanTurganbay and Xenova committed
Commit
251a81f
1 Parent(s): 330ba1e

Fix typos (#1)


- Fix typos (96af9556a05b911cd9fb3c9de2fef47677048002)


Co-authored-by: Joshua <Xenova@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +2 -2
README.md CHANGED

```diff
@@ -46,7 +46,7 @@ https://llava-vl.github.io/
 ## How to use the model
 
 First, make sure to have `transformers` installed from [branch](https://github.com/huggingface/transformers/pull/32673) or `transformers >= 4.45.0`.
-The model supports multi-image and multi-prompt generation. Meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template by applyong chat template:
+The model supports multi-image and multi-prompt generation. Meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template by applying the chat template:
 
 ### Using `pipeline`:
 
@@ -74,7 +74,7 @@ conversation = [
     ],
   },
 ]
-prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
+prompt = pipe.processor.apply_chat_template(conversation, add_generation_prompt=True)
 
 outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
 print(outputs)
```
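For context, the second hunk's fix matters because the README snippet never creates a standalone `processor` object; the processor lives on the `pipeline` instance, hence `pipe.processor.apply_chat_template(...)`. The sketch below illustrates the `conversation` message structure that call consumes and the *kind* of prompt string a LLaVA-style chat template renders from it. The `render_llava_prompt` function is a toy stand-in written for illustration, not the `transformers` implementation, and the exact template text varies per checkpoint.

```python
# Message structure from the README: one user turn containing an image
# placeholder followed by a text question.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]


def render_llava_prompt(messages, add_generation_prompt=True):
    """Toy renderer mimicking a LLaVA-style chat template (illustrative only)."""
    parts = []
    for msg in messages:
        pieces = []
        for item in msg["content"]:
            if item["type"] == "image":
                # Image slots become an <image> token the vision tower fills in.
                pieces.append("<image>")
            elif item["type"] == "text":
                pieces.append(item["text"])
        parts.append(f"{msg['role'].upper()}: " + "\n".join(pieces))
    rendered = "\n".join(parts)
    if add_generation_prompt:
        # add_generation_prompt=True appends the assistant header so the
        # model continues with its answer rather than a new user turn.
        rendered += "\nASSISTANT:"
    return rendered


prompt = render_llava_prompt(conversation)
print(prompt)
# USER: <image>
# What is shown in this image?
# ASSISTANT:
```

With the real library, `pipe.processor.apply_chat_template(conversation, add_generation_prompt=True)` plays this role using the checkpoint's own template, and the resulting string is passed to the pipeline as shown in the diff.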