---
quantized_by: cjpais
base_model: vikhyatk/moondream2
pipeline_tag: image-text-to-text
license: apache-2.0
---

A [llamafile](https://github.com/Mozilla-Ocho/llamafile) generated for [moondream2](https://huggingface.co/vikhyatk/moondream2)

Big thanks to @jartine and @vikhyat for their respective work on llamafile and moondream.
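
The llamafile is a single self-contained executable. A minimal sketch of running it locally (the filename below is a placeholder; use the actual `.llamafile` from this repo's files):

```bash
# Placeholder filename; substitute the actual .llamafile from this repo.
chmod +x moondream2.llamafile

# Running the llamafile starts a local llama.cpp server with a web UI
# (typically at http://localhost:8080).
./moondream2.llamafile
```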

# ORIGINAL MODEL CARD

moondream2 is a small vision language model designed to run efficiently on edge devices. Check out the [GitHub repository](https://github.com/vikhyat/moondream) for details, or try it out on the [Hugging Face Space](https://huggingface.co/spaces/vikhyatk/moondream2)!

**Benchmarks**

| Release | VQAv2 | GQA | TextVQA | TallyQA (simple) | TallyQA (full) |
| --- | --- | --- | --- | --- | --- |
| 2024-03-04 | 74.2 | 58.5 | 36.4 | - | - |
| 2024-03-06 | 75.4 | 59.8 | 43.1 | 79.5 | 73.2 |
| 2024-03-13 | 76.8 | 60.6 | 46.4 | 79.6 | 73.3 |
| **2024-04-02** (latest) | 77.7 | 61.7 | 49.7 | 80.1 | 74.2 |

**Usage**

```bash
pip install transformers einops
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# Pin to a dated release so future model updates don't change behavior.
model_id = "vikhyatk/moondream2"
revision = "2024-04-02"
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, revision=revision
)
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)

# Encode the image once, then ask questions about the encoding.
image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "Describe this image.", tokenizer))
```

The model is updated regularly, so we recommend pinning the model version to a specific release as shown above.
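
To see which dated releases exist before choosing a `revision`, one option (a sketch using `huggingface_hub`, which the snippet above does not itself require) is to list the repo's refs:

```python
from huggingface_hub import list_repo_refs

# Dated releases appear as git branches and/or tags on the Hub repo.
refs = list_repo_refs("vikhyatk/moondream2")
for ref in list(refs.branches) + list(refs.tags):
    print(ref.name)
```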
|