Question about MTEB Evaluation and max_seq_length Settings for e5-mistral-7b-instruct
#47 opened 9 days ago by george31
Is it possible to change the dimensions of the embeddings it generates?
#46 opened 2 months ago by sraj
Adding `safetensors` variant of this model
#45 opened 6 months ago by jakeyap
Best Practices for Fine-Tuning Models on Multi-Hop Datasets?
#43 opened 7 months ago by leonshub
How to set the instruct for the MIRACL dataset?
1 · #41 opened 8 months ago by Mengyao00
CUDA memory error when using sentence-transformers on a Tesla V100-PCIE-32GB
2 · #40 opened 8 months ago by al-h
How did you construct homogeneous batches when there are not enough samples to fill a single batch?
#38 opened 8 months ago by lightapple
Can anyone help with this phenomenon? Fine-tuning intfloat/e5-mistral-7b-instruct gives NaN loss and all-NaN intermediate model outputs
#37 opened 8 months ago by jhyapple
Use in sentence transformers
#35 opened 9 months ago by woofadu
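For threads like this one asking about sentence-transformers usage, the key detail is that queries need an instruction prefix while documents do not. A minimal sketch of the query-side prompt format described in the model card (the SentenceTransformer call is left as a comment since it pulls the full multi-GB checkpoint; the task string here is illustrative):

```python
def get_detailed_instruct(task_description: str, query: str) -> str:
    """Query-side prompt format for e5-mistral-7b-instruct.

    Documents are encoded without any prefix; only queries get the
    "Instruct: ...\nQuery: ..." wrapper.
    """
    return f"Instruct: {task_description}\nQuery: {query}"


task = "Given a web search query, retrieve relevant passages that answer the query"
prompt = get_detailed_instruct(task, "how much protein should a female eat")

# With the model itself (requires the full checkpoint and a large GPU):
# from sentence_transformers import SentenceTransformer
# model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")
# model.max_seq_length = 4096
# q_emb = model.encode([prompt], normalize_embeddings=True)
```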
Understanding memory consumption during inference
2 · #34 opened 9 months ago by andrew-kirfman
Questions about the training data
4 · #33 opened 10 months ago by xxxcliu
How much GPU memory is required for 32k context embedding?
1 · #32 opened 10 months ago by Labmem009
Quantization
1 · #30 opened 10 months ago by mrgiraffe
Training loss for reproduction
#29 opened 10 months ago by yjkimstats
Is there a way to do fine-tuning using STS datasets?
#27 opened 11 months ago by ijkim
Instruction / query document embedding question
2 · #26 opened 11 months ago by eek
KeyError: 'mistral' (application startup failed during deployment on Inference Endpoints)
1 · #24 opened 11 months ago by hoangsx
Best way to deploy this as an API endpoint?
12 · #21 opened 11 months ago by aigeek0x0
Error using multiple GPUs
4 · #20 opened 11 months ago by JesusUned
Can you share details about the negative sampling for the benchmark data?
2 · #19 opened 11 months ago by lightapple
Deployment to Inference Endpoints fails (weight model.embed_tokens.weight does not exist)
5 · #18 opened 11 months ago by ts314
How to extract a general text embedding
8 · #17 opened 11 months ago by daisyyayueyue
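The extraction step this thread asks about comes down to last-token pooling: taking the hidden state of each sequence's final non-padding token as the text embedding. A hedged sketch of that pooling for the right-padded case, using NumPy stand-ins for the model's hidden states (shapes and names here are illustrative):

```python
import numpy as np


def last_token_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Return the hidden state of each sequence's last real token.

    hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len)
    with 1 for real tokens and 0 for right padding.
    """
    lengths = attention_mask.sum(axis=1) - 1  # index of last non-padding token
    return hidden_states[np.arange(hidden_states.shape[0]), lengths]


# Toy check: batch of 2 sequences, seq_len 4, hidden dim 3.
hidden = np.arange(24, dtype=float).reshape(2, 4, 3)
mask = np.array([[1, 1, 1, 0],   # 3 real tokens -> pool index 2
                 [1, 1, 1, 1]])  # 4 real tokens -> pool index 3
emb = last_token_pool(hidden, mask)  # shape (2, 3)
```

In practice the pooled vectors are then L2-normalized before computing cosine similarities.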
Is it possible to convert the model to ONNX?
1 · #16 opened 11 months ago by serdarcaglar
About max sequence length
2 · #14 opened 12 months ago by jorisfu
Knowledge distillation into a smaller model
2 · #13 opened 12 months ago by tomaarsen
RuntimeError: weight model.embed_tokens.weight does not exist
6 · #12 opened 12 months ago by ivankeller
When should I use this instruction for RAG retrieval tasks?
5 · #11 opened 12 months ago by kk3dmax
Hardware Requirements
4 · #10 opened 12 months ago by Ravnoor1
Can you train a multilingual e5 based on a multilingual LLM?
3 · #9 opened 12 months ago by hantian
Symmetric task prompts
1 · #8 opened 12 months ago by mayjacky
Any plan to release fine-tuning scripts?
4 · #7 opened 12 months ago by Mengyao00
LoRA config
1 · #6 opened 12 months ago by SeanLee97
Any plan to release training data?
3 · #3 opened 12 months ago by memray
MLX example
3 · #1 opened 12 months ago by paulmaksimovich