This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset.

Transformers PaliGemma 3B weights, pre-trained with 224×224 input images and 128-token input/output text sequences. The weights are available in float32, bfloat16, and float16 formats for fine-tuning.

Original model: [Google/PaliGemma](https://huggingface.co/google/paligemma-3b-pt-224)
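
A minimal loading sketch (illustrative, not part of the original card; it assumes `transformers` and `torch` are installed) that selects one of the listed formats via the `torch_dtype` argument:

```python
import torch
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-pt-224"

# Pick one of the published formats: torch.float32 (the default),
# torch.bfloat16, or torch.float16.
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(model_id)
```

Loading in bfloat16 or float16 halves the memory footprint relative to float32, which helps when fine-tuning the 3B model on a single GPU.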
## Model description

PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by PaLI-3 and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tuning performance on a wide range of vision-language tasks such as image and short-video captioning, visual question answering, text reading, object detection, and object segmentation.
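
Continuing from the loading snippet above, the checkpoint's config reflects this two-part composition; a small sketch (attribute names as in recent `transformers` releases):

```python
# The config names both halves of the architecture described above:
# a SigLIP vision encoder and a Gemma text decoder.
print(model.config.vision_config.model_type)  # "siglip_vision_model"
print(model.config.text_config.model_type)    # "gemma"
print(model.config.vision_config.image_size)  # 224 for this checkpoint
```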
## Intended uses & limitations

PaliGemma is a single-turn vision-language model not meant for conversational use, and it works best when fine-tuned to a specific use case.

You can configure which task the model will solve by conditioning it with task prefixes such as "detect" or "segment". The pretrained models were trained in this fashion to imbue them with a rich set of capabilities (question answering, captioning, segmentation, etc.). However, they are not designed to be used directly; rather, they are meant to be transferred (by fine-tuning) to specific tasks that use a similar prompt structure. For interactive testing, you can use the "mix" family of models, which have been fine-tuned on a mixture of tasks. To see [google/paligemma-3b-mix-448](https://huggingface.co/google/paligemma-3b-mix-448) in action, check out this [Space](https://huggingface.co/spaces/big-vision/paligemma-hf), which uses the Transformers codebase.
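
An illustrative sketch of prefix conditioning with the Transformers API (the example image URL and the `answer en` prefix are choices made for this sketch, not part of the original card):

```python
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

# The "mix" checkpoint mentioned above responds to task prefixes
# without further fine-tuning.
model_id = "google/paligemma-3b-mix-448"
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)

# The prefix selects the task: "answer en" poses an English VQA question.
prompt = "answer en What is in this image?"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"  # any RGB image works
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[-1]

output = model.generate(**inputs, max_new_tokens=20)
# Drop the echoed prompt tokens so only the generated answer is printed.
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```

The same structure with a different prefix (for example "caption en" or "detect") targets the other capabilities listed above.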
## Training and evaluation data