---
quantized_by: cjpais
base_model: vikhyatk/moondream2
pipeline_tag: image-text-to-text
license: apache-2.0
tags:
- llamafile
---

A [llamafile](https://github.com/Mozilla-Ocho/llamafile) generated for [moondream2](https://huggingface.co/vikhyatk/moondream2)

Big thanks to [@jartine](https://huggingface.co/jartine) and [@vikhyat](https://huggingface.co/vikhyatk/moondream2) for their respective work on llamafile and moondream.

## How to Run (on macOS and Linux)

1. Download one of the llamafiles listed under Versions below
2. `chmod +x moondream2-q8.llamafile` - make it executable
3. `./moondream2-q8.llamafile` - run the llama.cpp server (full sequence shown below)
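
For reference, here is the same sequence as a shell session. A minimal sketch assuming you pick the Q8_0 build from the Versions list below; substitute the Q5_M URL if you prefer the smaller file:

```bash
# Download the Q8_0 llamafile (a single file that bundles weights and runtime)
curl -L -o moondream2-q8.llamafile \
  "https://huggingface.co/cjpais/moondream2-llamafile/resolve/main/moondream2-q8.llamafile?download=true"

# Make it executable and start the llama.cpp server
chmod +x moondream2-q8.llamafile
./moondream2-q8.llamafile
```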

## Versions

1. [Q5_M](https://huggingface.co/cjpais/moondream2-llamafile/resolve/main/moondream2-q5_k.llamafile?download=true)
2. [Q8_0](https://huggingface.co/cjpais/moondream2-llamafile/resolve/main/moondream2-q8.llamafile?download=true)

From my short testing, the Q8 is noticeably better.
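
Once the server is running you can also query it over HTTP. A minimal sketch, assuming the default listen address of `http://127.0.0.1:8080` and that your build exposes llama.cpp's multimodal `/completion` endpoint with an `image_data` field; the exact prompt template for moondream may differ:

```bash
# Base64-encode the image; [img-10] in the prompt marks where the
# image with "id": 10 is placed (llama.cpp multimodal convention)
IMG_B64=$(base64 -w0 photo.jpg)   # on macOS use: base64 -i photo.jpg

curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d "{
        \"prompt\": \"[img-10]\\n\\nQuestion: Describe this image.\\n\\nAnswer:\",
        \"image_data\": [{\"data\": \"$IMG_B64\", \"id\": 10}],
        \"n_predict\": 128
      }"
```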

# ORIGINAL MODEL CARD

moondream2 is a small vision language model designed to run efficiently on edge devices. Check out the [GitHub repository](https://github.com/vikhyat/moondream) for details, or try it out on the [Hugging Face Space](https://huggingface.co/spaces/vikhyatk/moondream2)!

**Benchmarks**

| Release | VQAv2 | GQA | TextVQA | TallyQA (simple) | TallyQA (full) |
| --- | --- | --- | --- | --- | --- |
| 2024-03-04 | 74.2 | 58.5 | 36.4 | - | - |
| 2024-03-06 | 75.4 | 59.8 | 43.1 | 79.5 | 73.2 |
| 2024-03-13 | 76.8 | 60.6 | 46.4 | 79.6 | 73.3 |
| **2024-04-02** (latest) | 77.7 | 61.7 | 49.7 | 80.1 | 74.2 |

**Usage**

```bash
pip install transformers einops
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

model_id = "vikhyatk/moondream2"
revision = "2024-04-02"  # pin a release; the repo is updated regularly
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, revision=revision
)
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)

# Encode the image once; it can then be reused across multiple questions
image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "Describe this image.", tokenizer))
```

The model is updated regularly, so we recommend pinning the model version to a
specific release as shown above.