---
license: apache-2.0
library_name: mlx
---

# mlx-community/clip-vit-large-patch14

This model was converted to MLX format from [`openai/clip-vit-large-patch14`](https://huggingface.co/openai/clip-vit-large-patch14).
Refer to the [original model card](https://huggingface.co/openai/clip-vit-large-patch14) for more details on the model.

## Use with mlx-examples

Download the model weights into a local folder (here `mlx_model`, the same path passed to `clip.load` below) 👇

```bash
pip install huggingface_hub hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir mlx_model mlx-community/clip-vit-large-patch14
```

Install `mlx-examples` and the CLIP example's dependencies.

```bash
git clone git@github.com:ml-explore/mlx-examples.git
cd mlx-examples/clip
pip install -r requirements.txt
```

Run the model.

```python
from PIL import Image

import clip

# Load the converted weights along with the matching tokenizer
# and image preprocessor.
model, tokenizer, img_processor = clip.load("mlx_model")
inputs = {
    "input_ids": tokenizer(["a photo of a cat", "a photo of a dog"]),
    "pixel_values": img_processor(
        [Image.open("assets/cat.jpeg"), Image.open("assets/dog.jpeg")]
    ),
}
output = model(**inputs)

# Get text and image embeddings:
text_embeds = output.text_embeds
image_embeds = output.image_embeds
```
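
The text and image embeddings live in a shared space, so you can score image-caption pairs by cosine similarity. A minimal sketch with `mlx.core`, continuing from the variables above (it normalizes explicitly on the assumption that the returned embeddings may not be unit-length):

```python
import mlx.core as mx

# Normalize to unit length (harmless if the model already returns
# normalized embeddings), then compare every image to every caption.
image_embeds = image_embeds / mx.linalg.norm(image_embeds, axis=-1, keepdims=True)
text_embeds = text_embeds / mx.linalg.norm(text_embeds, axis=-1, keepdims=True)

similarity = image_embeds @ text_embeds.T
print(similarity)  # (2, 2) matrix: rows are images, columns are captions
```

Higher values indicate a better match, so the cat photo should score highest against "a photo of a cat".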