Part of the GTE models collection: General Text Embedding Models released by Alibaba Group.
The gte-multilingual-reranker-base model is the first reranker model in the GTE family. As described in the mGTE paper cited below, it targets multilingual text retrieval with long-context support.
Usage with Hugging Face Transformers (transformers>=4.36.0)
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name_or_path = "Alibaba-NLP/gte-multilingual-reranker-base"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name_or_path,
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
model.eval()

# Each pair is [query, candidate document]; the model scores how relevant
# the document is to the query.
pairs = [
    ["中国的首都在哪儿", "北京"],
    ["what is the capital of China?", "北京"],
    ["how to implement quick sort in python?", "Introduction of quick sort"],
]

with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()

print(scores)
# tensor([1.2315, 0.5923, 0.3041])
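The outputs are raw relevance logits, so they are only meaningful relative to one another within a candidate set. If scores in the [0, 1] range are preferred, one common convention (an assumption here, not something the model card prescribes) is to squash the logits with a sigmoid:

# Optional post-processing (a common convention, not prescribed by the model card):
# map raw logits into (0, 1) for easier thresholding.
probabilities = torch.sigmoid(scores)
print(probabilities)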
Usage with Infinity
Infinity is an MIT-licensed inference REST API server.
docker run --gpus all -v $PWD/data:/app/.cache -p "7997":"7997" \
michaelf34/infinity:0.0.68 \
v2 --model-id Alibaba-NLP/gte-multilingual-reranker-base --revision "main" --dtype bfloat16 --batch-size 32 --device cuda --engine torch --port 7997
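Once the container is running, the server can be queried over HTTP. The snippet below is a minimal sketch assuming Infinity exposes a /rerank endpoint that accepts a JSON payload with model, query, and documents fields; the endpoint path and schema are assumptions, so check the server's interactive docs at http://localhost:7997/docs for the exact request and response format.

import requests

# Hypothetical client call against the Infinity server started above.
# The endpoint path and payload fields are assumptions; verify via /docs.
response = requests.post(
    "http://localhost:7997/rerank",
    json={
        "model": "Alibaba-NLP/gte-multilingual-reranker-base",
        "query": "what is the capital of China?",
        "documents": ["北京", "Introduction of quick sort"],
    },
)
print(response.json())  # per-document relevance scores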
Results of reranking on multiple text retrieval datasets
More detailed experimental results can be found in the paper.
In addition to the open-source releases, the GTE series models are also available as commercial API services on Alibaba Cloud.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
If you find our paper or models helpful, please consider citing:
@misc{zhang2024mgtegeneralizedlongcontexttext,
  title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval},
  author={Xin Zhang and Yanzhao Zhang and Dingkun Long and Wen Xie and Ziqi Dai and Jialong Tang and Huan Lin and Baosong Yang and Pengjun Xie and Fei Huang and Meishan Zhang and Wenjie Li and Min Zhang},
  year={2024},
  eprint={2407.19669},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2407.19669},
}