RaushanTurganbay (HF staff) committed
Commit 4a3eb1f · verified · 1 parent: cc1dff4

update for chat template

Files changed (1): README.md (+20 −0)
README.md CHANGED
@@ -93,6 +93,26 @@ output = model.generate(**inputs, max_new_tokens=100)
 
 print(processor.decode(output[0], skip_special_tokens=True))
 ```
 
+-----------
+From `transformers>=4.48`, you can also pass an image URL or a local path in the conversation history and let the chat template handle the rest.
+The chat template will load the image for you and return the inputs as `torch.Tensor`s, which you can pass directly to `model.generate()`.
+
+```python
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+            {"type": "text", "text": "What is shown in this image?"},
+        ],
+    },
+]
+
+inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
+output = model.generate(**inputs, max_new_tokens=50)
+```
+
 ### Model optimization
 
 #### 4-bit quantization through `bitsandbytes` library
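
Since passing a URL through the chat template requires `transformers>=4.48` per the note above, callers on mixed environments may want to gate on the installed version before building URL-style messages. The sketch below shows the message layout from the diff together with a minimal version check; the `supports_url_in_template` helper is a hypothetical illustration, not part of the `transformers` API, and does not handle pre-release version strings.

```python
# Hypothetical helper: compare dotted version strings numerically to decide
# whether the URL-in-chat-template path (transformers >= 4.48) is available.
def supports_url_in_template(installed: str, required: str = "4.48") -> bool:
    """Return True if `installed` >= `required` (plain dotted versions only)."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(required)

# Message layout expected by processor.apply_chat_template, as in the diff:
# one user turn containing an image entry (by URL) and a text entry.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
```

On older versions the image would instead be loaded manually (e.g. with PIL) and passed to the processor alongside the formatted prompt.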