RaushanTurganbay (HF staff) committed · verified · Commit df44d4d · 1 Parent(s): 61a33f7

update for chat template

Files changed (1): README.md +22 -0
README.md CHANGED
@@ -119,6 +119,28 @@ output = model.generate(**inputs_video, max_new_tokens=100, do_sample=False)
 print(processor.decode(output[0][2:], skip_special_tokens=True))
 ```
 
+-----------
+From transformers>=v4.48, you can also pass an image/video URL or local path in the conversation history and let the chat template handle the rest.
+For videos you also need to indicate how many frames to sample via `num_frames`; otherwise the whole video will be loaded.
+The chat template will load the image/video for you and return inputs as `torch.Tensor`, which you can pass directly to `model.generate()`.
+
+```python
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+            {"type": "video", "path": "my_video.mp4"},
+            {"type": "text", "text": "What is shown in this image and video?"},
+        ],
+    },
+]
+
+inputs = processor.apply_chat_template(messages, num_frames=8, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
+output = model.generate(**inputs, max_new_tokens=50)
+```
+
+
 ### Inference with images as inputs
 
 To generate from images use the below code after loading the model as shown above: