CLIP

A collection of multi-modal models that can be used for Smart Search in Immich, sorted by size in descending order.
This repo contains ONNX exports of the CLIP model openai/clip-vit-base-patch32, with the visual and textual encoders split into separate models so that image and text embeddings can be generated independently.
This repo is specifically intended for use with Immich, a self-hosted photo library.
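As a rough sketch of how a split visual encoder like this might be used outside of Immich, the snippet below preprocesses an RGB image with CLIP's standard normalization constants and shows (commented out) how the result could be fed to the ONNX visual encoder via onnxruntime. The model file path and input name are assumptions, not taken from this repo; check the actual file layout and model signature before use.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image (already resized to 224x224)
    into a normalized NCHW float32 batch for the CLIP visual encoder."""
    # CLIP's published per-channel normalization constants.
    mean = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
    std = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)
    x = image.astype(np.float32) / 255.0
    x = (x - mean) / std
    # HWC -> CHW, then add a batch dimension: (1, 3, 224, 224).
    return x.transpose(2, 0, 1)[np.newaxis].astype(np.float32)

# Hypothetical usage with onnxruntime -- "visual/model.onnx" and the
# input name "image" are placeholders for whatever this repo actually ships:
# import onnxruntime as ort
# session = ort.InferenceSession("visual/model.onnx")
# (embedding,) = session.run(None, {"image": preprocess(img)})
```

Splitting the encoders this way lets a photo library embed images once at import time with the visual model, and embed only the search query with the much smaller textual model at query time.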