🇫🇷 french-gte-multilingual-base

This model is a 51.4% smaller version of Alibaba-NLP/gte-multilingual-base for French and English, created using the mtem-pruner space.

This pruned model should perform similarly to the original model on French and English tasks while having a much smaller memory footprint. However, it may not perform well for the other languages covered by the original multilingual model, since tokens not commonly used in French and English were removed from the original vocabulary.
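The size reduction comes from vocabulary pruning: the multilingual vocabulary is restricted to tokens that actually occur in French and English text, and only the corresponding rows of the input embedding matrix (which accounts for a large share of the parameters in this kind of model) are kept. The snippet below is a minimal sketch of that idea only, not the actual mtem-pruner implementation; the two-sentence corpus is a placeholder, and a real pruner would also rebuild the tokenizer so that token ids are remapped to the smaller vocabulary.

from transformers import AutoModel, AutoTokenizer

base = "Alibaba-NLP/gte-multilingual-base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModel.from_pretrained(base, trust_remote_code=True)

# Placeholder corpus: a real pruner would scan large French/English corpora.
corpus = ["Bonjour, comment allez-vous ?", "Hello, how are you?"]
kept_ids = set(tokenizer.all_special_ids)
for text in corpus:
    kept_ids.update(tokenizer(text)["input_ids"])
kept_ids = sorted(kept_ids)

# Keep only the embedding rows for the retained token ids.
old_emb = model.get_input_embeddings().weight.data
new_emb = old_emb[kept_ids]
print(old_emb.shape, "->", new_emb.shape)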

Usage

You can use this model with the Transformers library:

from transformers import AutoModel, AutoTokenizer

model_name = "ijohn07/french-english-gte-base"
# trust_remote_code is required because GTE models use custom modeling code
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
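To turn the Transformers outputs into sentence embeddings, the original gte-multilingual-base card normalizes the [CLS] token representation; the sketch below follows that approach, continuing from the snippet above with placeholder sentences.

import torch
import torch.nn.functional as F

sentences = ["Bonjour le monde !", "Hello world!"]  # placeholder inputs
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

# CLS pooling + L2 normalization, as in the original gte-multilingual-base card.
embeddings = F.normalize(outputs.last_hidden_state[:, 0], p=2, dim=1)
print(embeddings @ embeddings.T)  # cosine similarities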

Or with the sentence-transformers library:

from sentence_transformers import SentenceTransformer

# trust_remote_code is needed here as well, since the underlying architecture uses custom code
model = SentenceTransformer("ijohn07/french-english-gte-base", trust_remote_code=True)
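Once loaded, encoding works as with any Sentence Transformers model; a quick example with placeholder sentences:

from sentence_transformers import util

embeddings = model.encode(["Bonjour le monde !", "Hello world!"])  # placeholder inputs
print(util.cos_sim(embeddings, embeddings))  # pairwise cosine similarities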

Credits: cc @antoinelouis
