yosefw committed on
Commit c0c3fcc · verified · 1 Parent(s): 8b7702e

Add new SparseEncoder model

1_SpladePooling/config.json ADDED
{
  "pooling_strategy": "max",
  "activation_function": "relu",
  "word_embedding_dimension": 30522
}
README.md ADDED
---
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:1000000
- loss:SpladeLoss
- loss:SparseMarginMSELoss
- loss:FlopsLoss
base_model: yosefw/SPLADE-BERT-Mini-BS256
widget:
- text: Caffeine is a central nervous system stimulant. It works by stimulating the
    brain. Caffeine is found naturally in foods and beverages such as coffee, tea,
    colas, energy and chocolate. Botanical sources of caffeine include kola nuts,
    guarana, and yerba mate.
- text: Tim Hardaway, Jr. Compared To My 5ft 10in (177cm) Height. Tim Hardaway, Jr.'s
    height is 6ft 6in or 198cm while I am 5ft 10in or 177cm. I am shorter compared
    to him. To find out how much shorter I am, we would have to subtract my height
    from Tim Hardaway, Jr.'s height. Therefore I am shorter to him for about 21cm.
- text: benefits of honey and lemon
- text: 'How To Cook Corn on the Cob in the Microwave What You Need. Ingredients 1
    or more ears fresh, un-shucked sweet corn Equipment Microwave Cooling rack or
    cutting board Instructions. Place 1 to 4 ears of corn in the microwave: Arrange
    1 to 4 ears of corn, un-shucked, in the microwave. If you prefer, you can set
    them on a microwaveable plate or tray. If you need to cook more than 4 ears of
    corn, cook them in batches. Microwave for 3 to 5 minutes: For just 1 or 2 ears
    of corn, microwave for 3 minutes. For 3 or 4 ears, microwave for 4 minutes. If
    you like softer corn or if your ears are particularly large, microwave for an
    additional minute.'
- text: 'The law recognizes two basic kinds of warranties: implied warranties and
    express warranties. Implied Warranties. Implied warranties are unspoken, unwritten
    promises, created by state law, that go from you, as a seller or merchant, to
    your customers.'
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
model-index:
- name: SPLADE Sparse Encoder
  results:
  - task:
      type: sparse-information-retrieval
      name: Sparse Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: dot_accuracy@1
      value: 0.5018
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.8286
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.9194
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.9746
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.5018
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.2839333333333333
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.19103999999999996
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.10255999999999998
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.4867666666666667
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.81485
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.9096166666666667
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.9709333333333334
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.7457042059559617
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.6749323809523842
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.670785161566693
      name: Dot Map@100
    - type: query_active_dims
      value: 22.584999084472656
      name: Query Active Dims
    - type: query_sparsity_ratio
      value: 0.9992600419669592
      name: Query Sparsity Ratio
    - type: corpus_active_dims
      value: 174.85202722777373
      name: Corpus Active Dims
    - type: corpus_sparsity_ratio
      value: 0.9942712788405814
      name: Corpus Sparsity Ratio
---

# SPLADE Sparse Encoder

This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [yosefw/SPLADE-BERT-Mini-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Mini-BS256) using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences and paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

## Model Details

### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [yosefw/SPLADE-BERT-Mini-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Mini-BS256) <!-- at revision 986bc55b61d9f0559f86423fb5807b9f4a3b7094 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30522 dimensions
- **Similarity Function:** Dot Product
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)

### Full Model Architecture

```
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
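
For intuition, here is a minimal sketch of what the `SpladePooling` step above computes, assuming the standard SPLADE formulation (a log-saturated ReLU over the MLM vocabulary logits, max-pooled over token positions). This is an illustration of the technique, not the library's exact implementation:

```python
import torch

def splade_pool(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """mlm_logits: (batch, seq_len, 30522) logits from BertForMaskedLM.
    attention_mask: (batch, seq_len); 1 for real tokens, 0 for padding."""
    # 'relu' activation_function: log-saturated ReLU, log(1 + relu(logit))
    scores = torch.log1p(torch.relu(mlm_logits))
    # Zero out padding positions so they cannot win the max
    scores = scores * attention_mask.unsqueeze(-1)
    # 'max' pooling_strategy: one 30522-dim vector per input text
    return scores.max(dim=1).values

# Toy check: batch of 2 texts, 5 token positions each
pooled = splade_pool(torch.randn(2, 5, 30522), torch.ones(2, 5))
print(pooled.shape)  # torch.Size([2, 30522])
```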

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("yosefw/SPLADE-BERT-Mini-BS256-distil")
# Run inference
queries = [
    "common law implied warranty",
]
documents = [
    'The law recognizes two basic kinds of warranties: implied warranties and express warranties. Implied Warranties. Implied warranties are unspoken, unwritten promises, created by state law, that go from you, as a seller or merchant, to your customers.',
    'An implied warranty is a contract law term for certain assurances that are presumed in the sale of products or real property.',
    'The implied warranty of fitness for a particular purpose is a promise that the law says you, as a seller, make when your customer relies on your advice that a product can be used for some specific purpose.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[22.4810, 22.8282, 21.7472]])
```
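
Because every dimension corresponds to an entry in the BERT vocabulary, the embeddings are directly interpretable. Continuing from the snippet above, a small sketch that maps the strongest dimensions of the first query embedding back to tokens (this assumes the embeddings come back as PyTorch tensors; `.to_dense()` covers the case where a sparse layout is returned):

```python
import torch

# Densify (no-op if already dense) and take the first query's vector
emb = torch.as_tensor(query_embeddings).to_dense()[0]
values, indices = torch.topk(emb, k=10)
tokens = model.tokenizer.convert_ids_to_tokens(indices.tolist())
for token, weight in zip(tokens, values.tolist()):
    print(f"{token}\t{weight:.2f}")  # the 10 highest-weighted vocabulary tokens
```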

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Sparse Information Retrieval

* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)

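As a hedged sketch of how such an evaluation is wired up (the queries, corpus, and relevant-document mapping below are toy stand-ins, not the actual evaluation data, and the constructor arguments are an assumption based on the evaluator's documented interface):

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

model = SparseEncoder("yosefw/SPLADE-BERT-Mini-BS256-distil")
evaluator = SparseInformationRetrievalEvaluator(
    queries={"q1": "benefits of honey and lemon"},                   # query id -> text
    corpus={"d1": "Honey and lemon tea may soothe a sore throat."},  # doc id -> text
    relevant_docs={"q1": {"d1"}},                                    # query id -> relevant doc ids
)
results = evaluator(model)  # dict of metrics such as dot_ndcg@10 and sparsity stats
```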

| Metric                | Value      |
|:----------------------|:-----------|
| dot_accuracy@1        | 0.5018     |
| dot_accuracy@3        | 0.8286     |
| dot_accuracy@5        | 0.9194     |
| dot_accuracy@10       | 0.9746     |
| dot_precision@1       | 0.5018     |
| dot_precision@3       | 0.2839     |
| dot_precision@5       | 0.191      |
| dot_precision@10      | 0.1026     |
| dot_recall@1          | 0.4868     |
| dot_recall@3          | 0.8148     |
| dot_recall@5          | 0.9096     |
| dot_recall@10         | 0.9709     |
| **dot_ndcg@10**       | **0.7457** |
| dot_mrr@10            | 0.6749     |
| dot_map@100           | 0.6708     |
| query_active_dims     | 22.585     |
| query_sparsity_ratio  | 0.9993     |
| corpus_active_dims    | 174.852    |
| corpus_sparsity_ratio | 0.9943     |
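
Queries activate on average about 23 of the 30522 vocabulary dimensions and documents about 175; the sparsity ratios above are simply the complementary fractions of zero dimensions, which a quick check reproduces:

```python
# sparsity_ratio = 1 - active_dims / vocab_size
vocab_size = 30522
print(1 - 22.585 / vocab_size)   # ~0.99926 -> query_sparsity_ratio
print(1 - 174.852 / vocab_size)  # ~0.99427 -> corpus_sparsity_ratio
```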

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 1,000,000 training samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | query | positive | negative_1 | negative_2 | label |
  |:--------|:------|:---------|:-----------|:-----------|:------|
  | type    | string | string | string | string | list |
  | details | <ul><li>min: 4 tokens</li><li>mean: 9.01 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 80.48 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 79.27 tokens</li><li>max: 213 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 75.56 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>size: 2 elements</li></ul> |
* Samples:
  | query | positive | negative_1 | negative_2 | label |
  |:------|:---------|:-----------|:-----------|:------|
  | <code>friendly home health care</code> | <code>Medicare Evaluation of the Quality of Care. The quality of care given at Friendly Care Home Health Services is periodically evaluated by Medicare. The results of the most recent evaluation period are listed below to help you compare home care agencies in your area. More Info.</code> | <code>Every participant took the same survey so it is a useful way to compare Friendly Care Home Health Services to other home care agencies.</code> | <code>It covers a wide range of services and can often delay the need for long-term nursing home care. More specifically, home health care may include occupational and physical therapy, speech therapy, and even skilled nursing.</code> | <code>[1.2647171020507812, 9.144136428833008]</code> |
  | <code>how much does the xbox elite controller weigh</code> | <code>How much does an Xbox 360 weigh? A: The weight of an Xbox 360 depends on the different model purchased, with an original Xbox 360 or Xbox 360 Elite weighing 7.7 pounds with a hard drive and a newer Xbox 360 Slim weighing 6.3 pounds. An Xbox 360 without a hard drive weighs 7 pounds.</code> | <code>How much does 6 xbox 360 games/cases weigh? How much does an xbox 360 elite weigh (in the box)? How much does an xbox 360 weigh? im going to fedex one? I am considering purchasing an Xbox 360, or a Playstation 3...</code> | <code>1 You can only upload videos smaller than 600 MB. 2 You can only upload a photo (png, jpg, jpeg) or video (3gp, 3gpp, mp4, mov, avi, mpg, mpeg, rm). 3 You can only upload a photo or video. Video should be smaller than <b>600 MB/5 minutes</b>.</code> | <code>[4.903870582580566, 18.162578582763672]</code> |
  | <code>what county is norfolk, ct in</code> | <code>Norfolk, Connecticut. Norfolk (local /ˈnɔːrfɔːrk/) is a town in Litchfield County, Connecticut, United States. The population was 1,787 at the 2010 census.</code> | <code>Norfolk Historic District. The Norfolk Historic District was listed on the National Register of Historic Places in 1979. Portions of the content on this web page were adapted from a copy of the original nomination document. [†] Adaptation copyright © 2010, The Gombach Group. Description.</code> | <code>Terms begin the first day of the month. Grand Juries, 1st and 3rd Wednesday of each month. Civil cases set by agreement of counsel and consent of the court; scheduling orders are mandatory in most cases. Civil and Criminal trials begin at 9:30 a.m.</code> | <code>[12.4237699508667, 21.46290397644043]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
      "loss": "SparseMarginMSELoss",
      "document_regularizer_weight": 0.12,
      "query_regularizer_weight": 0.2
  }
  ```
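
A minimal sketch of reconstructing this objective in code, based on the parameters above. The wrapped `SparseMarginMSELoss` distills from teacher scores (the two-element `label` column plausibly holds the teacher's margins for the two negatives, an assumption), while the regularizer weights apply FLOPS sparsity penalties, which is why `loss:FlopsLoss` appears in the model tags:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

model = SparseEncoder("yosefw/SPLADE-BERT-Mini-BS256")
loss = SpladeLoss(
    model,
    loss=SparseMarginMSELoss(model),   # MarginMSE distillation objective
    document_regularizer_weight=0.12,  # FLOPS penalty on document vectors
    query_regularizer_weight=0.2,      # FLOPS penalty on query vectors
)
```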

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.025
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
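
These values map directly onto the trainer's arguments. A hypothetical sketch of the corresponding setup (the output directory is an invented name, and the dataset/loss wiring from the sections above is elided):

```python
from sentence_transformers.sparse_encoder import (
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)

args = SparseEncoderTrainingArguments(
    output_dir="splade-bert-mini-distil",  # hypothetical path
    eval_strategy="epoch",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=4e-05,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.025,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
)
# trainer = SparseEncoderTrainer(model=model, args=args,
#                                train_dataset=train_dataset, loss=loss)
# trainer.train()
```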

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.025
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch | Step  | Training Loss | dot_ndcg@10 |
|:-----:|:-----:|:-------------:|:-----------:|
| 1.0   | 15625 | 9.3147        | 0.7353      |
| 2.0   | 31250 | 7.5267        | 0.7429      |
| 3.0   | 46875 | 6.3289        | 0.7457      |


### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.0.0
- Transformers: 4.53.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
    title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
    author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
    year={2022},
    eprint={2205.04733},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2205.04733},
}
```

#### SparseMarginMSELoss
```bibtex
@misc{hofstätter2021improving,
    title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
    author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
    year={2021},
    eprint={2010.02666},
    archivePrefix={arXiv},
    primaryClass={cs.IR}
}
```

#### FlopsLoss
```bibtex
@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 256,
  "initializer_range": 0.02,
  "intermediate_size": 1024,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 4,
  "num_hidden_layers": 4,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.53.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
{
  "model_type": "SparseEncoder",
  "__version__": {
    "sentence_transformers": "5.0.0",
    "transformers": "4.53.3",
    "pytorch": "2.6.0+cu124"
  },
  "prompts": {
    "query": "",
    "document": ""
  },
  "default_prompt_name": null,
  "similarity_fn_name": "dot"
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:fd663f83e4f500c3e9c7a5bf6c1498fbd225ec75e7f7ee38006f1889d5db9d00
size 44814856
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.sparse_encoder.models.MLMTransformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_SpladePooling",
    "type": "sentence_transformers.sparse_encoder.models.SpladePooling"
  }
]
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 1000000000000000019884624838656,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff