Instructions to use Y-Research-Group/CSRv2-retrieval with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use Y-Research-Group/CSRv2-retrieval with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer

# Load the model from the Hugging Face Hub
model = SentenceTransformer("Y-Research-Group/CSRv2-retrieval")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]

# Encode the sentences and compute pairwise cosine similarities
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
- Notebooks
- Google Colab
- Kaggle
/home/ubuntu/Project/retrieval_projects/csr-v2/CSR_train_and_inference/codebase_for_release/Qwen3-Embedding-4B-retrieval-finetune
#1
by yversley-ebay - opened
The adapter config mentions a path to Qwen3-Embedding-4B-retrieval-finetune - is this the vanilla Qwen3 embedding model? If not, where can it be found?
Yes, this is based on the vanilla Qwen3-Embedding-4B model (https://huggingface.co/Qwen/Qwen3-Embedding-4B). The path points to the base model that was fine-tuned, and the adapter_model.safetensors file contains the LoRA fine-tuning weights, just like our other models such as CSRv2-classification (https://huggingface.co/Y-Research-Group/CSRv2-classification). Thank you for your comment; we will update the config file soon.
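For reference, a minimal sketch of what the corrected config might contain, assuming the standard PEFT `adapter_config.json` layout: pointing `base_model_name_or_path` at the Hub ID instead of the local path would let the adapter resolve its base model automatically. The surrounding keys shown here are illustrative, not copied from the actual config:

```json
{
  "base_model_name_or_path": "Qwen/Qwen3-Embedding-4B",
  "peft_type": "LORA",
  "task_type": "FEATURE_EXTRACTION"
}
```

Until the config is updated, users can work around the local path by loading the base model themselves and attaching the adapter weights explicitly.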