
# mlx-community/clip-vit-base-patch32

This model was converted to MLX format from openai/clip-vit-base-patch32. Refer to the original model card for more details on the model.

## Use with mlx-examples

Download the repository 👇

```bash
pip install huggingface_hub hf_transfer

export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir <LOCAL FOLDER PATH> mlx-community/clip-vit-base-patch32
```
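
Equivalently, the download can be scripted from Python with `huggingface_hub`'s `snapshot_download`. A minimal sketch; the `mlx_model` target directory is an illustrative choice made to match the `clip.load` call further down:

```python
from huggingface_hub import snapshot_download

# Download the converted weights into a local folder.
# "mlx_model" is an assumed name, matching the clip.load() call below.
path = snapshot_download(
    repo_id="mlx-community/clip-vit-base-patch32",
    local_dir="mlx_model",
)
print(path)  # local folder containing the MLX weights
```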

Clone mlx-examples and install the CLIP example's requirements.

```bash
git clone git@github.com:ml-explore/mlx-examples.git
cd mlx-examples/clip
pip install -r requirements.txt
```

Run the model.

```python
from PIL import Image
import clip

# "mlx_model" is the local folder holding the converted weights
# (the <LOCAL FOLDER PATH> used in the download step above).
model, tokenizer, img_processor = clip.load("mlx_model")
inputs = {
    "input_ids": tokenizer(["a photo of a cat", "a photo of a dog"]),
    "pixel_values": img_processor(
        [Image.open("assets/cat.jpeg"), Image.open("assets/dog.jpeg")]
    ),
}
output = model(**inputs)

# Get text and image embeddings:
text_embeds = output.text_embeds
image_embeds = output.image_embeds
```
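
The embeddings can then be compared with the usual CLIP recipe: L2-normalize each vector and take dot products as cosine similarities. A minimal sketch, not part of the example above; the logit scale of 100 approximates the typical learned CLIP temperature, and the names `sims` and `probs` are illustrative:

```python
import mlx.core as mx

def normalize(x):
    # L2-normalize along the feature axis
    return x / mx.linalg.norm(x, axis=-1, keepdims=True)

# Cosine similarity between every image and every text prompt
sims = normalize(image_embeds) @ normalize(text_embeds).T
# Softmax over text prompts, scaled by an assumed logit scale of ~100
probs = mx.softmax(100.0 * sims, axis=-1)
print(probs)  # each row: probabilities over the prompts for one image
```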