Update README.md
README.md CHANGED
````diff
@@ -47,8 +47,8 @@ The generative model can be used to caption images, answer questions about them.
 ```python
 from transformers import AutoModel, AutoProcessor
 
-model = AutoModel.from_pretrained("unum-cloud/uform-gen2-qwen-
-processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-qwen-
+model = AutoModel.from_pretrained("unum-cloud/uform-gen2-qwen-500m", trust_remote_code=True)
+processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-qwen-500m", trust_remote_code=True)
 
 prompt = "Question or Instruction"
 image = Image.open("image.jpg")
@@ -76,7 +76,7 @@ For captioning evaluation we measure CLIPScore and RefCLIPScore¹.
 
 | Model                               | LLM Size |  SQA |    MME | MMBench | Average¹ |
 | :---------------------------------- | -------: | ----:| ------:| -------:| --------:|
-| UForm-Gen2-Qwen-
+| UForm-Gen2-Qwen-500m                |     0.5B | 45.5 |  880.1 |    42.0 |    29.31 |
 | MobileVLM v2                        |     1.4B | 52.1 | 1302.8 |    57.7 |    36.81 |
 | LLaVA-Phi                           |     2.7B | 68.4 | 1335.1 |    59.8 |    42.95 |
 
````