---
license: apache-2.0
library_name: mlx
---

# mlx-community/clip-vit-base-patch32
This model was converted to MLX format from [`clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32).
Refer to the [original model card](https://huggingface.co/openai/clip-vit-base-patch32) for more details on the model.
## Use with mlx-examples

Download the model repository 👇

```bash
pip install huggingface_hub hf_transfer

export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir <LOCAL FOLDER PATH> mlx-community/clip-vit-base-patch32
```

Clone `mlx-examples` and install the CLIP example's requirements.

```bash
git clone git@github.com:ml-explore/mlx-examples.git
cd mlx-examples/clip
pip install -r requirements.txt
```

Run the model.

```python
from PIL import Image
import clip

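# "mlx_model" should point to the directory with the downloaded MLX weights
# (the <LOCAL FOLDER PATH> used above)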
model, tokenizer, img_processor = clip.load("mlx_model")
inputs = {
    "input_ids": tokenizer(["a photo of a cat", "a photo of a dog"]),
    "pixel_values": img_processor(
        [Image.open("assets/cat.jpeg"), Image.open("assets/dog.jpeg")]
    ),
}
output = model(**inputs)

# Get text and image embeddings:
text_embeds = output.text_embeds
image_embeds = output.image_embeds
```
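
To score how well each caption matches each image, you can compute cosine similarities between the embeddings. The snippet below is a minimal sketch using `mlx.core`; it normalizes the embeddings explicitly rather than assuming the model returns unit-norm vectors.

```python
import mlx.core as mx

def l2_normalize(x):
    # Scale each row to unit L2 norm
    return x / mx.sqrt(mx.sum(x * x, axis=-1, keepdims=True))

# 2 x 2 matrix of text-to-image cosine similarities for the example above
similarity = mx.matmul(
    l2_normalize(text_embeds), mx.transpose(l2_normalize(image_embeds))
)
print(similarity)
```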