Is it possible to get the sparse embedding? · 3 · #23 opened 3 days ago by weiminw
How to change the embedding dimension? · 1 · #19 opened about 1 month ago by storm2008
MTEB scores computed with eval_mteb.py differ greatly from those shown on the Leaderboard; unclear why? · 1 · #16 opened about 1 month ago by YangGuang30
Customized Further Fine-Tuning by Users · #15 opened about 2 months ago by fwj
Model keeps cache of generation in Transformers (fixed using torch.no_grad()) · 1 · #14 opened about 2 months ago by Pietroferr
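The fix named in #14 can be sketched as follows. Without torch.no_grad(), each forward pass records an autograd graph that keeps intermediate activations alive, so memory grows across inference calls; wrapping inference in torch.no_grad() drops that bookkeeping. The tiny model below is a stand-in assumption for the real embedding model — the thread title only states that torch.no_grad() resolved the issue:

```python
import torch
import torch.nn as nn

# Stand-in for the embedding model (assumption; not the actual gte-Qwen2 code).
model = nn.Linear(8, 4)
x = torch.randn(2, 8)

y_tracked = model(x)          # autograd graph is built and retained
with torch.no_grad():
    y_inference = model(x)    # no graph recorded: lower, stable memory use

print(y_tracked.requires_grad, y_inference.requires_grad)  # True False
```

torch.inference_mode() is a stricter alternative with the same effect for pure inference.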
gte-Qwen2-1.5B-instruct outputs NaN during half-precision inference · 2 · #13 opened about 2 months ago by Erin
Qwen 2.5 1.5B retrain? · 4 · #12 opened 2 months ago by tomaarsen
MTEB evaluation speed issue · 2 · #10 opened 2 months ago by xiaopli11
Support for xFormers and FlashAttention · 1 · #9 opened 3 months ago by le723z
ONNX.data · #8 opened 4 months ago by Saugatkafley
Fine-tuning · #5 opened 4 months ago by deleted
Sequence classification · 1 · #3 opened 5 months ago by prudant
MTEB French score · 3 · #2 opened 5 months ago by abhamadi
"Bidirectional attention"
2
#1 opened 5 months ago
by
olivierdehaene