Update Readme on usage with Infinity
1
#36 opened 10 days ago
by
michaelfeil
In reranker, does the order of query and passage matter?
#35 opened 16 days ago
by
samwu66
`"id2label": { "0": "LABEL_0" }` in config, so are lower relevance scores better?
#34 opened about 1 month ago
by
shalinshah1993
How to evaluate BAAI/bge-reranker-v2-m3 on the C-MTEB Reranking tasks using MTEB
#33 opened about 2 months ago
by
IeohMingChan
Import error - pooling and weights
#32 opened 2 months ago
by
HansLeve
Is bge-reranker-v2-m3 a pointwise, listwise, or pairwise method?
1
#31 opened 2 months ago
by
Rebecca19990101
How to use this model in Amazon SageMaker?
3
#30 opened 3 months ago
by
Shalini-416
Availability in Pinecone's Inference API
1
#29 opened 3 months ago
by
gdj0nes
Can it accept vectors directly?
2
#27 opened 4 months ago
by
qinrong
What threshold should I use when filtering out irrelevant passages based on the sigmoid scores?
2
#25 opened 5 months ago
by
shaunxu
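A minimal sketch of the filtering the question above asks about: mapping raw reranker logits through a sigmoid and keeping only passages above a cutoff. The example logits and the 0.5 threshold are assumptions for illustration, not a recommendation from the model authors; a suitable threshold depends on your data.

```python
import math

def sigmoid(logit: float) -> float:
    """Map a raw relevance logit to a score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-logit))

def filter_passages(scored, threshold=0.5):
    """Keep (passage, score) pairs whose sigmoid score clears the threshold."""
    return [(p, sigmoid(s)) for p, s in scored if sigmoid(s) >= threshold]

# Hypothetical logits, as a cross-encoder reranker might emit them.
scored = [
    ("relevant passage", 2.3),
    ("borderline passage", 0.1),
    ("off-topic passage", -4.0),
]
kept = filter_passages(scored, threshold=0.5)  # drops the off-topic passage
```

In practice the threshold is usually tuned on a labeled validation set rather than fixed a priori.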
request error: error sending request for url (https://huggingface.co/BAAI/bge-reranker-v2-m3/resolve/main/config.json):
3
#24 opened 5 months ago
by
qinrong
SageMaker deployment to GPU
#23 opened 5 months ago
by
chaitanya87
Bad performance of bge-reranker-v2-gemma compared with bge-reranker-v2-m3
2
#22 opened 5 months ago
by
shaunxu
How to deploy BAAI/bge-reranker-v2-m3 on TEI?
2
#21 opened 5 months ago
by
qinrong
fine-tuning with evaluator
#20 opened 5 months ago
by
praveensonu
Recommended GPU for over 100 QPS
#18 opened 6 months ago
by
duzhihua
Missing fine-tuning instructions for bge-reranker-v2-m3?
1
#17 opened 6 months ago
by
jackkwok
Multi-GPU at FP16? Examples. Large memory allocations.
1
#16 opened 6 months ago
by
flash9001
How to make it run on GPU?
1
#15 opened 6 months ago
by
HarshalPa
Add Sentence Transformers config
#14 opened 7 months ago
by
peakji
ONNX version
#13 opened 7 months ago
by
Malithius
Any way to 'drop' the model to save GPU RAM?
1
#12 opened 8 months ago
by
rag-perplexity
cutoff score to consider for LLM call
4
#11 opened 8 months ago
by
karthikfds
bf16 vs fp16
1
#10 opened 8 months ago
by
Totole
Document length for v2-m3?
5
#9 opened 8 months ago
by
rag-perplexity
What is the maximum number of tokens v2-m3 supports?
1
#8 opened 8 months ago
by
devillaws
Are there any ways to speed it up?
3
#7 opened 8 months ago
by
hanswang1973
cross-lingual reranking
2
#6 opened 8 months ago
by
victorkeke
Is it supported in the LangChain framework?
2
#4 opened 8 months ago
by
Nicole828
Missing pytorch_model.bin file?
1
#3 opened 8 months ago
by
baobo5625
need onnx model
#1 opened 8 months ago
by
LowPower