---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- reranker
- cross-encoder
- transformers.js
---

---
<br><br>

# jina-reranker-v1-turbo-en-onnx

This repo was forked from the **jinaai/jina-reranker-v1-turbo-en** model and contains only the ONNX version of the model. Below is the original model card from the source repo.

---

<p align="center">
<img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/603763514de52ff951d89793/AFoybzd5lpBQXEBrQHuTt.png?w=200&h=200&f=face" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>

<p align="center">
<b>Trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>

# jina-reranker-v1-turbo-en

This model is designed for **blazing-fast** reranking while maintaining **competitive performance**. What's more, it leverages the power of our [JinaBERT](https://arxiv.org/abs/2310.19923) model as its foundation. `JinaBERT` itself is a unique variant of the BERT architecture that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409). This allows `jina-reranker-v1-turbo-en` to process significantly longer sequences of text compared to other reranking models, up to an impressive **8,192** tokens.
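
To give an intuition for the mechanism: ALiBi replaces positional embeddings with a distance-based penalty added directly to attention scores. The sketch below is illustrative only, based on the ALiBi paper's formulation, and is not the actual JinaBERT implementation; the `slope` value is a per-head hyperparameter.

```python
# Illustrative sketch of the symmetric bidirectional ALiBi bias
# (not the actual JinaBERT code): each attention score between query
# position i and key position j is penalized in proportion to |i - j|,
# so the model can extrapolate to sequence lengths unseen in training.

def alibi_bias(seq_len, slope):
    """Return the seq_len x seq_len bias matrix with entries -slope * |i - j|."""
    return [[-slope * abs(i - j) for j in range(seq_len)]
            for i in range(seq_len)]

bias = alibi_bias(4, 0.5)
# The bias is symmetric (bidirectional) and zero on the diagonal.
```

Because the penalty depends only on relative distance, nothing ties the model to a fixed maximum length, which is what enables the 8,192-token window.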

To achieve this remarkable speed, `jina-reranker-v1-turbo-en` employs a technique called knowledge distillation. Here, a complex but slower model (like our original [jina-reranker-v1-base-en](https://jina.ai/reranker/)) acts as a teacher, condensing its knowledge into a smaller, faster student model. The student retains most of the teacher's knowledge, allowing it to deliver similar accuracy in a fraction of the time.
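
The core idea can be sketched as follows. This is a minimal, hypothetical illustration of score-based distillation, not Jina's actual training code: the student is optimized so its relevance scores match the teacher's on the same (query, document) pairs.

```python
# Hypothetical sketch of score-based knowledge distillation (not Jina's
# actual training code): the student minimizes the mean squared error
# between its relevance scores and the teacher's scores.

def mse_distillation_loss(teacher_scores, student_scores):
    """Mean squared error between teacher and student relevance scores."""
    assert len(teacher_scores) == len(student_scores)
    n = len(teacher_scores)
    return sum((t - s) ** 2 for t, s in zip(teacher_scores, student_scores)) / n

# Example: the student slightly disagrees with the teacher on three pairs.
teacher = [0.92, 0.15, 0.40]
student = [0.90, 0.20, 0.35]
loss = mse_distillation_loss(teacher, student)
```

Driving this loss toward zero transfers the teacher's ranking behavior into the smaller architecture.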

Here's a breakdown of the reranker models we provide:

| Model Name | Layers | Hidden Size | Parameters (Millions) |
| ------------------------------------------------------------------------------------ | ------ | ----------- | --------------------- |
| [jina-reranker-v1-base-en](https://jina.ai/reranker/) | 12 | 768 | 137.0 |
| [jina-reranker-v1-turbo-en](https://huggingface.co/jinaai/jina-reranker-v1-turbo-en) | 6 | 384 | 37.8 |
| [jina-reranker-v1-tiny-en](https://huggingface.co/jinaai/jina-reranker-v1-tiny-en) | 4 | 384 | 33.0 |

> Currently, the `jina-reranker-v1-base-en` model is not available on Hugging Face. You can access it via the [Jina AI Reranker API](https://jina.ai/reranker/).

As you can see, `jina-reranker-v1-turbo-en` offers a balanced approach with **6 layers** and **37.8 million** parameters. This translates to fast search and reranking while preserving a high degree of accuracy. `jina-reranker-v1-tiny-en` prioritizes speed even further, achieving the fastest inference with its **4-layer**, **33.0 million** parameter architecture. This makes it ideal for scenarios where absolute top accuracy is less crucial.

# Usage

1. The easiest way to start using `jina-reranker-v1-turbo-en` is through Jina AI's [Reranker API](https://jina.ai/reranker/).

```bash
curl https://api.jina.ai/v1/rerank \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "jina-reranker-v1-turbo-en",
    "query": "Organic skincare products for sensitive skin",
    "documents": [
      "Eco-friendly kitchenware for modern homes",
      "Biodegradable cleaning supplies for eco-conscious consumers",
      "Organic cotton baby clothes for sensitive skin",
      "Natural organic skincare range for sensitive skin",
      "Tech gadgets for smart homes: 2024 edition",
      "Sustainable gardening tools and compost solutions",
      "Sensitive skin-friendly facial cleansers and toners",
      "Organic food wraps and storage solutions",
      "All-natural pet food for dogs with allergies",
      "Yoga mats made from recycled materials"
    ],
    "top_n": 3
  }'
```

2. Alternatively, you can use the `sentence-transformers` library (`sentence-transformers>=0.27.0`). You can install it via pip:

```bash
pip install -U sentence-transformers
```

Then, you can use the following code to interact with the model:

```python
from sentence_transformers import CrossEncoder

# Load the model; here we use our turbo-sized model
model = CrossEncoder("jinaai/jina-reranker-v1-turbo-en", trust_remote_code=True)

# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
    "Eco-friendly kitchenware for modern homes",
    "Biodegradable cleaning supplies for eco-conscious consumers",
    "Organic cotton baby clothes for sensitive skin",
    "Natural organic skincare range for sensitive skin",
    "Tech gadgets for smart homes: 2024 edition",
    "Sustainable gardening tools and compost solutions",
    "Sensitive skin-friendly facial cleansers and toners",
    "Organic food wraps and storage solutions",
    "All-natural pet food for dogs with allergies",
    "Yoga mats made from recycled materials"
]

results = model.rank(query, documents, return_documents=True, top_k=3)
```

3. You can also use the `transformers` library to interact with the model programmatically. Install it first with `pip install transformers`.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    'jinaai/jina-reranker-v1-turbo-en', num_labels=1, trust_remote_code=True
)

# Example query and documents
query = "Organic skincare products for sensitive skin"
documents = [
    "Eco-friendly kitchenware for modern homes",
    "Biodegradable cleaning supplies for eco-conscious consumers",
    "Organic cotton baby clothes for sensitive skin",
    "Natural organic skincare range for sensitive skin",
    "Tech gadgets for smart homes: 2024 edition",
    "Sustainable gardening tools and compost solutions",
    "Sensitive skin-friendly facial cleansers and toners",
    "Organic food wraps and storage solutions",
    "All-natural pet food for dogs with allergies",
    "Yoga mats made from recycled materials"
]

# Construct (query, document) sentence pairs
sentence_pairs = [[query, doc] for doc in documents]

# compute_score is provided by the model's remote code
scores = model.compute_score(sentence_pairs)
```
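
Unlike `rank()` above, `compute_score` returns raw per-pair scores in document order. A small helper like the hypothetical one below (not part of the model API) recovers the top-k documents from those scores:

```python
# Hypothetical helper (not part of the model API): given per-document
# scores in the original order, return the k highest-scoring documents
# with their scores, mirroring what the rank() methods do.

def top_k_by_score(documents, scores, k=3):
    """Sort documents by score, descending, and keep the top k."""
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]
```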

4. You can also use the `transformers.js` library to run the model directly in JavaScript (in-browser, Node.js, Deno, etc.)!

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```

Then, you can use the following code to interact with the model:
```js
import { AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';

const model_id = 'jinaai/jina-reranker-v1-turbo-en';
const model = await AutoModelForSequenceClassification.from_pretrained(model_id, { quantized: false });
const tokenizer = await AutoTokenizer.from_pretrained(model_id);

/**
 * Performs ranking with the CrossEncoder on the given query and documents. Returns a sorted list with the document indices and scores.
 * @param {string} query A single query
 * @param {string[]} documents A list of documents
 * @param {Object} options Options for ranking
 * @param {number} [options.top_k=undefined] Return the top-k documents. If undefined, all documents are returned.
 * @param {boolean} [options.return_documents=false] If true, also returns the documents. If false, only returns the indices and scores.
 */
async function rank(query, documents, {
    top_k = undefined,
    return_documents = false,
} = {}) {
    const inputs = tokenizer(
        new Array(documents.length).fill(query),
        { text_pair: documents, padding: true, truncation: true }
    );
    const { logits } = await model(inputs);
    return logits.sigmoid().tolist()
        .map(([score], i) => ({
            corpus_id: i,
            score,
            ...(return_documents ? { text: documents[i] } : {})
        })).sort((a, b) => b.score - a.score).slice(0, top_k);
}

// Example usage:
const query = "Organic skincare products for sensitive skin";
const documents = [
    "Eco-friendly kitchenware for modern homes",
    "Biodegradable cleaning supplies for eco-conscious consumers",
    "Organic cotton baby clothes for sensitive skin",
    "Natural organic skincare range for sensitive skin",
    "Tech gadgets for smart homes: 2024 edition",
    "Sustainable gardening tools and compost solutions",
    "Sensitive skin-friendly facial cleansers and toners",
    "Organic food wraps and storage solutions",
    "All-natural pet food for dogs with allergies",
    "Yoga mats made from recycled materials",
];

const results = await rank(query, documents, { return_documents: true, top_k: 3 });
console.log(results);
```

That's it! You can now use the `jina-reranker-v1-turbo-en` model in your projects.

# Evaluation

We evaluated Jina Reranker on 3 key benchmarks to ensure top-tier performance and search relevance.

| Model Name | NDCG@10 (17 BEIR datasets) | NDCG@10 (5 LoCo datasets) | Hit Rate (LlamaIndex RAG) |
| ------------------------------------------- | -------------------------- | ------------------------- | ------------------------- |
| `jina-reranker-v1-base-en` | **52.45** | **87.31** | **85.53** |
| `jina-reranker-v1-turbo-en` (you are here) | **49.60** | **69.21** | **85.13** |
| `jina-reranker-v1-tiny-en` | **48.54** | **70.29** | **85.00** |
| `mxbai-rerank-base-v1` | 49.19 | - | 82.50 |
| `mxbai-rerank-xsmall-v1` | 48.80 | - | 83.69 |
| `ms-marco-MiniLM-L-6-v2` | 48.64 | - | 82.63 |
| `ms-marco-MiniLM-L-4-v2` | 47.81 | - | 83.82 |
| `bge-reranker-base` | 47.89 | - | 83.03 |

**Note:**

- `NDCG@10` is a measure of ranking quality, with higher scores indicating better search results. `Hit Rate` measures the percentage of relevant documents that appear in the top 10 search results.
- LoCo results are not available for the other models since they **do not support** documents longer than 512 tokens.
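
For reference, here is the standard NDCG@k formula as a small sketch; the benchmark harness used for the table above may differ in implementation details (e.g. gain function or tie handling):

```python
import math

# Standard NDCG@k: discounted cumulative gain of the ranked list,
# normalized by the gain of the ideal (best possible) ordering.

def dcg(relevances, k=10):
    """DCG@k: each relevance is discounted by log2(rank + 2), rank 0-based."""
    return sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(relevances[:k]))

def ndcg(relevances, k=10):
    """NDCG@k in [0, 1]; 1.0 means the ranking is already ideal."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0

# A perfectly ordered list scores 1.0; reversing it lowers the score.
print(ndcg([3, 2, 1, 0]))  # -> 1.0
```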

For more details, please refer to our [benchmarking sheets](https://docs.google.com/spreadsheets/d/1V8pZjENdBBqrKMzZzOWc2aL60wtnR0yrEBY3urfO5P4/edit?usp=sharing).

# Contact

Join our [Discord community](https://discord.jina.ai/) and chat with other community members about ideas.