Swe-CLIP 500k

Usage

To use this model along with the original CLIP vision encoder, you need to download the code and the additional linear weights from the Multilingual-CLIP GitHub repository. Once this is done, you can load and use the model with the following code:

from src import multilingual_clip

model = multilingual_clip.load_model('Swe-CLIP-500k')  # load the Swedish text encoder
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
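In a retrieval setting, these 640-dimensional text embeddings are matched against image embeddings from the paired CLIP vision encoder by cosine similarity. The sketch below uses random stand-in arrays in place of real model outputs (the shapes and the similarity computation are the point; `text_emb` and `image_emb` would normally come from the text and vision encoders above):

```python
import numpy as np

def cosine_similarity_matrix(text_emb, image_emb):
    """Pairwise cosine similarity between row vectors of two matrices."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    i = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    return t @ i.T

# Stand-in arrays: 2 captions and 3 images, 640-dim as in this model.
rng = np.random.default_rng(0)
text_emb = rng.standard_normal((2, 640))
image_emb = rng.standard_normal((3, 640))

sims = cosine_similarity_matrix(text_emb, image_emb)
print(sims.shape)           # (2, 3): one similarity score per caption/image pair
best = sims.argmax(axis=1)  # index of the best-matching image for each caption
```

Ranking the columns of `sims` per row gives text-to-image retrieval; ranking the rows per column gives the reverse direction.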