New GTE model: request to refresh the results
We submitted a new model, Alibaba-NLP/gte-Qwen2-7B-instruct, trained on top of the latest open-source Qwen2-7B LLM. Could you please refresh the Space?
Thanks!
Updated; congrats - very impressive 🙌🙌🙌
Big congratulations! I quite enjoy the Qwen2 line of models, and their power is once again visible here in the large jump in performance without any change in training strategy between 1.5 and 2: 67.34 -> 70.24 for English and 69.56 -> 72.05 for Chinese (with comparably large jumps on each of the tasks, e.g. Retrieval).
I also quite enjoy that I can use it out of the box with Sentence Transformers, LangChain, LlamaIndex, etc. Well done!
- Tom Aarsen
Hi, the embedding dimension of gte-Qwen2-7B-instruct is 3584, but it is shown as 4096 on MTEB/C-MTEB. Where should it be modified?
It is fetched from here: https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/1_Pooling/config.json#L2
If you fix it there, it will be fixed on the leaderboard.
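For context, the leaderboard reads the embedding dimension from the `word_embedding_dimension` field of the model's `1_Pooling/config.json`. A minimal sketch of the fix might look like the following; the sample dict is illustrative only (it mirrors the shape of a Sentence Transformers pooling config, not the actual contents of that file):

```python
import json

# Illustrative pooling config mirroring the shape of 1_Pooling/config.json
# in a Sentence Transformers model repo (example values, not the real file).
pooling_config = {
    "word_embedding_dimension": 4096,  # the incorrectly reported dimension
    "pooling_mode_mean_tokens": False,
    "pooling_mode_lasttoken": True,
}

def fix_dimension(config: dict, true_dim: int) -> dict:
    """Return a copy of the pooling config with the dimension corrected."""
    fixed = dict(config)
    fixed["word_embedding_dimension"] = true_dim
    return fixed

fixed = fix_dimension(pooling_config, 3584)
print(json.dumps(fixed, indent=2))
```

After editing the file in the model repository, the leaderboard would pick up the corrected value on its next refresh, since it fetches the dimension directly from that config.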