Text Ranking
Transformers
Safetensors
sentence-transformers
English
Chinese
multilingual
qwen3_5_text
text-generation
reranker
retrieval
rag
agentic-search
qwen3.5
Instructions to use infgrad/Prism-Qwen3.5-Reranker-9B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
How to use infgrad/Prism-Qwen3.5-Reranker-9B with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("infgrad/Prism-Qwen3.5-Reranker-9B")
model = AutoModelForCausalLM.from_pretrained("infgrad/Prism-Qwen3.5-Reranker-9B")
```
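The snippet above only loads the weights. For causal-LM rerankers, a common scoring scheme (an assumption here, not confirmed by this card) is to read the next-token logits for a "yes"/"no" token pair and convert them to a relevance probability; the conversion itself is just a two-way softmax:

```python
import math

def yes_no_score(yes_logit: float, no_logit: float) -> float:
    """Two-way softmax: probability assigned to the 'yes' token."""
    m = max(yes_logit, no_logit)  # subtract the max for numerical stability
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)

# Hypothetical logits for one (query, passage) pair
print(yes_no_score(2.0, -1.0))  # a large 'yes' margin gives a score near 1
```

The actual prompt template and choice of tokens depend on how the model was trained, so check the model card or config before relying on this pattern.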
How to use infgrad/Prism-Qwen3.5-Reranker-9B with sentence-transformers:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("infgrad/Prism-Qwen3.5-Reranker-9B")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

# One relevance score per (query, passage) pair
scores = model.predict([(query, passage) for passage in passages])
print(scores)
```
- Notebooks
- Google Colab
- Kaggle
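Since `predict` returns one score per (query, passage) pair, turning scores into a ranking is just a sort. A minimal sketch, with placeholder scores standing in for real model output (the numbers are illustrative only):

```python
# Passages in the same order as in the snippet above
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
# Placeholder scores, one per passage (not actual model output)
scores = [0.12, 0.93, 0.41, 0.27]

# Highest score first, as a typical RAG reranking step requires
ranked = sorted(zip(scores, passages), reverse=True)
best_score, best_passage = ranked[0]
print(best_passage)
```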
The CrossEncoder pipeline is composed of two modules:
```json
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.base.modules.transformer.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_LogitScore",
    "type": "sentence_transformers.cross_encoder.modules.logit_score.LogitScore"
  }
]
```