This model has been pushed to the Hub using the `PyTorchModelHubMixin` integration.
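
For context, `PyTorchModelHubMixin` (from the `huggingface_hub` library) adds `from_pretrained` and `push_to_hub` methods to any `torch.nn.Module`. The sketch below illustrates the general pattern only; the class and repo names are placeholders, not AraClip's actual definition:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Placeholder model class to illustrate the mixin pattern;
# AraClip's real definition lives in the araclip package.
class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 512):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

model = TinyModel()
model.push_to_hub("your-username/tiny-model")  # uploads weights + config
reloaded = TinyModel.from_pretrained("your-username/tiny-model")
```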

## How to use

```bash
pip install git+https://github.com/Arabic-Clip/Araclip.git
```
```python
import numpy as np
from PIL import Image
from araclip import AraClip

# Load the pretrained model from the Hub
model = AraClip.from_pretrained("Arabic-Clip/araclip")

# Candidate captions: "a sitting cat", "a jumping cat", "a dog", "a horse"
labels = ["ู‚ุทุฉ ุฌุงู„ุณุฉ", "ู‚ุทุฉ ุชู‚ูุฒ", "ูƒู„ุจ", "ุญุตุงู†"]
image = Image.open("cat.png")

# Embed the image and each caption
image_features = model.embed(image=image)
text_features = np.stack([model.embed(text=label) for label in labels])

# Rank the captions by similarity to the image
similarities = text_features @ image_features
best_match = labels[np.argmax(similarities)]

print(f"The image is most similar to: {best_match}")
# ู‚ุทุฉ ุฌุงู„ุณุฉ ("a sitting cat")
```
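
The same `embed` API works in the other direction: embed one text query and rank a set of images against it. A minimal sketch, continuing from the snippet above; the extra image files are hypothetical placeholders:

```python
# Rank several local images against one Arabic query
# (continues from the example above; file names are placeholders).
query_features = model.embed(text="ู‚ุทุฉ ุฌุงู„ุณุฉ")  # "a sitting cat"
paths = ["cat.png", "dog.png", "horse.png"]
image_features = np.stack([model.embed(image=Image.open(p)) for p in paths])

scores = image_features @ query_features
print(f"Best match: {paths[int(np.argmax(scores))]}")
```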
