andito (HF staff) committed
Commit 1b1b4a5
1 Parent(s): f1140e5

Update README.md

Files changed (1):
  1. README.md (+97, −31)

README.md CHANGED
@@ -5,9 +5,9 @@ tags: []
 
 # Model Card for Model ID
 
-<!-- Provide a quick summary of what the model is/does. -->
-
 
 
 ## Model Details
 
@@ -17,13 +17,10 @@ tags: []
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
 
 ### Model Sources [optional]
 
@@ -31,55 +28,124 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 
 - **Repository:** [More Information Needed]
 - **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
 
 ## Uses
 
-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
-### Direct Use
 
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
-[More Information Needed]
 
-### Downstream Use [optional]
 
-<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 
-[More Information Needed]
 
-### Out-of-Scope Use
 
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
-[More Information Needed]
 
-## Bias, Risks, and Limitations
 
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-[More Information Needed]
 
-### Recommendations
 
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
-## How to Get Started with the Model
 
-Use the code below to get started with the model.
 
-[More Information Needed]
 
 ## Training Details
 
 ### Training Data
 
-<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-[More Information Needed]
 
 ### Training Procedure
 
 
 # Model Card for Model ID
 
+SmolVLM is a compact open multimodal model that accepts arbitrary sequences of image and text inputs to produce text outputs. Designed for efficiency, SmolVLM can answer questions about images, describe visual content, create stories grounded on multiple images, or function as a pure language model without visual inputs. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance on multimodal tasks.
+
+We release the checkpoints under the Apache 2.0 license.
 
 ## Model Details
 
 
 This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
+- **Developed by:** Hugging Face 🤗
+- **Model type:** Multi-modal model (image + text)
+- **Language(s) (NLP):** English
+- **License:** Apache 2.0
 
 ### Model Sources [optional]
 
 - **Repository:** [More Information Needed]
 - **Paper [optional]:** [More Information Needed]
+- **Demo [optional]:** https://huggingface.co/spaces/HuggingFaceTB/SmolVLM
 
 ## Uses
 
+SmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. The model does not support image generation.
+
+To fine-tune SmolVLM on a specific task, you can follow the fine-tuning tutorial.
+
+### Technical Summary
+
+SmolVLM leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience. It introduces several changes compared to previous models:
+
+- Image compression: We introduce more aggressive image compression than in Idefics3, enabling the model to infer faster and use less RAM.
+- Visual token encoding: It uses 81 visual tokens to encode image patches of size 384×384. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.
+
+More details about the training and architecture are available in our technical report.
 
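As a rough illustration of the patch arithmetic described above — this is our own sketch, not code from the SmolVLM repository, and it assumes simple ceiling-division tiling while ignoring any extra global-image tokens the real preprocessing may add:

```python
import math

PATCH_SIZE = 384        # patch resolution stated above
TOKENS_PER_PATCH = 81   # visual tokens per 384x384 patch, as stated above

def estimated_visual_tokens(width: int, height: int) -> int:
    """Estimate the visual token count by tiling the image into 384x384 patches."""
    patches = math.ceil(width / PATCH_SIZE) * math.ceil(height / PATCH_SIZE)
    return patches * TOKENS_PER_PATCH

print(estimated_visual_tokens(384, 384))   # 81: a single patch
print(estimated_visual_tokens(768, 768))   # 324: a 2x2 grid of patches
```

Under this simplification, doubling both image dimensions quadruples the visual token budget, which is why the compression choices above matter for speed and RAM.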
+### How to get started
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoProcessor, AutoModelForVision2Seq
+from transformers.image_utils import load_image
+
+DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
+
+# Load images
+image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
+image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")
+
+# Initialize processor and model
+processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
+model = AutoModelForVision2Seq.from_pretrained(
+    "HuggingFaceTB/SmolVLM-Instruct", torch_dtype=torch.bfloat16
+).to(DEVICE)
+
+# Create input messages
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image"},
+            {"type": "text", "text": "What do we see in this image?"}
+        ]
+    },
+    {
+        "role": "assistant",
+        "content": [
+            {"type": "text", "text": "This image shows a city skyline with prominent landmarks."}
+        ]
+    },
+    {
+        "role": "user",
+        "content": [
+            {"type": "image"},
+            {"type": "text", "text": "And how about this image?"}
+        ]
+    }
+]
+
+# Prepare inputs
+prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
+inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
+inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
+
+# Generate outputs
+generated_ids = model.generate(**inputs, max_new_tokens=500)
+generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
+
+print(generated_texts[0])
+```
 
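One note on the snippet above: `batch_decode` returns the full sequence, prompt included. A common pattern — our own suggestion, not part of the model card — is to slice off the prompt tokens before decoding. The trimming itself is plain tensor slicing, shown here with stand-in values:

```python
import torch

# Stand-in shapes: generate() returns the prompt followed by the new tokens.
prompt_len = 5                                   # inputs["input_ids"].shape[1] in the real code
generated_ids = torch.arange(8).unsqueeze(0)     # batch of 1: 5 prompt + 3 new tokens
new_token_ids = generated_ids[:, prompt_len:]    # keep only the newly generated tokens

print(new_token_ids.tolist())                    # [[5, 6, 7]]
```

In the real snippet you would pass `new_token_ids` to `processor.batch_decode` to print only the model's answer.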
+### Model optimizations
+
+**Precision**: For better performance, load and run the model in half precision (`torch.float16` or `torch.bfloat16`) if your hardware supports it.
+
+```python
+model = AutoModelForVision2Seq.from_pretrained(
+    "HuggingFaceTB/SmolVLM-Instruct",
+    torch_dtype=torch.bfloat16
+).to(DEVICE)
+```
+
+**Vision Encoder Efficiency**: Adjust the image resolution by setting `size={"longest_edge": N*384}` when initializing the processor, where N is your desired value. The default `N=4` works well, but for documents `N=5` might be beneficial. Decreasing N can save GPU memory for lower-resolution images; this is also useful if you want to fine-tune on videos.
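The `longest_edge` rule above can be sketched numerically. This is our own illustration of the documented setting, not the processor's exact resizing code (the real preprocessing may round or snap dimensions differently):

```python
def resize_target(width: int, height: int, n: int = 4, base: int = 384) -> tuple:
    """Scale an image so its longest edge becomes n * base, keeping aspect ratio."""
    scale = (n * base) / max(width, height)
    return round(width * scale), round(height * scale)

# With the default N=4 the longest edge is capped at 1536 pixels:
print(resize_target(3000, 1500))        # (1536, 768)
# A smaller N shrinks inputs further, saving GPU memory:
print(resize_target(3000, 1500, n=2))   # (768, 384)
```

The processor itself would be configured as described above, e.g. `AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct", size={"longest_edge": 4*384})`.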
+
+## Misuse and Out-of-scope Use
+
+SmolVLM is not intended for high-stakes scenarios or critical decision-making processes that affect an individual's well-being or livelihood. The model may produce content that appears factual but may not be accurate. Misuse includes, but is not limited to:
+
+- Prohibited Uses:
+  - Evaluating or scoring individuals (e.g., in employment, education, credit)
+  - Critical automated decision-making
+  - Generating unreliable factual content
+- Malicious Activities:
+  - Spam generation
+  - Disinformation campaigns
+  - Harassment or abuse
+  - Unauthorized surveillance
+
+### License
+
+SmolVLM is built upon the following pre-trained models:
+
+- https://huggingface.co/google/siglip-so400m-patch14-384
+- https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
+
+We release the SmolVLM checkpoints under the Apache 2.0 license.
 
 ## Training Details
 
 ### Training Data
 
+The training data
 
 ### Training Procedure