gubartz committed
Commit c3eb07b
1 Parent(s): 521e462

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,457 @@
+ ---
+ language: []
+ library_name: sentence-transformers
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - dataset_size:1M<n<10M
+ - loss:TripletLoss
+ base_model: sentence-transformers/paraphrase-MiniLM-L12-v2
+ metrics:
+ - cosine_accuracy
+ - dot_accuracy
+ - manhattan_accuracy
+ - euclidean_accuracy
+ - max_accuracy
+ widget:
+ - source_sentence: 'method: Making reflective work practices visible'
+   sentences:
+   - 'method: Job quality takes into account both wage and non-wage attributes of a
+     job.'
+   - 'purpose: There could therefore be rank differences in the leadership behavioural
+     patterns of managers.'
+   - 'negative: SN has a positive effect on the user''s intention to use toward the
+     SNS.'
+ - source_sentence: 'findings: Proposed logistics framework'
+   sentences:
+   - 'purpose: However these may not be the only reasons for undertaking collection
+     evaluation.'
+   - 'purpose: Clearly, there is variation in the definition and understanding of the
+     term sustainability.'
+   - 'purpose: The study is based on a panel data regression analysis of 234 SMEs over
+     a 10-year period (2004-2013).'
+ - source_sentence: 'method: Electoral campaigns and party websites'
+   sentences:
+   - 'method: Track, leadership style, and team outcomes'
+   - 'purpose: , three CKM strategies that organizations use to manage customer knowledge
+     are considered.'
+   - 'findings: Motherhood directly affects career progression.'
+ - source_sentence: 'negative: Entrepreneurship education in Iran'
+   sentences:
+   - 'negative: Sensemaking as local weather'
+   - 'findings: In the next section, we will develop hypotheses to explain retail banner
+     divestment timing.'
+   - 'negative: Thus, the purpose of this paper is to review AR in retailing within
+     business-oriented research.'
+ - source_sentence: 'purpose: 2.2 Decentralization and participation'
+   sentences:
+   - 'purpose: Social norm approach and feedback'
+   - 'findings: The upper path of the model represents how counter-knowledge directly
+     affects ACAP, reducing HC.'
+   - 'purpose: Online strategy building requires a series of steps.'
+ pipeline_tag: sentence-similarity
+ model-index:
+ - name: SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L12-v2
+   results:
+   - task:
+       type: triplet
+       name: Triplet
+     dataset:
+       name: triplet
+       type: triplet
+     metrics:
+     - type: cosine_accuracy
+       value: 0.6998206089274619
+       name: Cosine Accuracy
+     - type: dot_accuracy
+       value: 0.39671483834759774
+       name: Dot Accuracy
+     - type: manhattan_accuracy
+       value: 0.6998506744703453
+       name: Manhattan Accuracy
+     - type: euclidean_accuracy
+       value: 0.7153344290553406
+       name: Euclidean Accuracy
+     - type: max_accuracy
+       value: 0.7153344290553406
+       name: Max Accuracy
+ ---
+
+ # SentenceTransformer based on sentence-transformers/paraphrase-MiniLM-L12-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/paraphrase-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L12-v2) <!-- at revision 3ab2765205fa23269bcc8c8e08ae5b1c35203ab4 -->
+ - **Maximum Sequence Length:** 256 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
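+
+ For illustration, a minimal sketch of assembling the same two-module pipeline by hand; the module arguments mirror the printout above, and loading the published checkpoint directly (as in the Usage section below) is the simpler, equivalent route:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # Transformer encoder that truncates inputs at 256 tokens, as configured above
+ word_embedding_model = models.Transformer(
+     "sentence-transformers/paraphrase-MiniLM-L12-v2", max_seq_length=256
+ )
+ # Mean pooling over token embeddings, yielding one 384-dim vector per text
+ pooling_model = models.Pooling(
+     word_embedding_model.get_word_embedding_dimension(),
+     pooling_mode="mean",
+ )
+ model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
+ ```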
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference:
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("gubartz/facet_retriever")
+ # Run inference
+ sentences = [
+     'purpose: 2.2 Decentralization and participation',
+     'purpose: Social norm approach and feedback',
+     'findings: The upper path of the model represents how counter-knowledge directly affects ACAP, reducing HC.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
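+
+ The widget and training samples in this card prefix each sentence with a facet label (`purpose:`, `method:`, `findings:`, `negative:`). Below is a small retrieval sketch under the assumption that queries and candidates carry the same prefixes at inference time; the query and corpus strings are illustrative only:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("gubartz/facet_retriever")
+
+ # Hypothetical facet-prefixed query and candidates
+ query = "purpose: Why do organisations adopt sustainable procurement?"
+ corpus = [
+     "purpose: In this section of the paper, we try to explain citizen attitudes towards sustainable procurement.",
+     "method: The study is based on a panel data regression analysis of 234 SMEs.",
+     "findings: Motherhood directly affects career progression.",
+ ]
+
+ # Cosine similarity between the query and each candidate, shape [1, 3]
+ scores = model.similarity(model.encode(query), model.encode(corpus))
+ print(corpus[int(scores.argmax())])
+ ```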
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Triplet
+ * Dataset: `triplet`
+ * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | **cosine_accuracy** | **0.6998** |
+ | dot_accuracy        | 0.3967     |
+ | manhattan_accuracy  | 0.6999     |
+ | euclidean_accuracy  | 0.7153     |
+ | max_accuracy        | 0.7153     |
+
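+ For reference, a hedged sketch of how such numbers are produced with `TripletEvaluator`; the single triplet below is a placeholder drawn from the widget examples, whereas the reported accuracies come from the full evaluation split:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import TripletEvaluator
+
+ model = SentenceTransformer("gubartz/facet_retriever")
+
+ # Placeholder triplet; pass the full anchor/positive/negative lists in practice
+ evaluator = TripletEvaluator(
+     anchors=["purpose: 2.2 Decentralization and participation"],
+     positives=["purpose: Social norm approach and feedback"],
+     negatives=["negative: Sensemaking as local weather"],
+     name="triplet",
+ )
+ # Returns a dict of accuracies, e.g. {'triplet_cosine_accuracy': ...}
+ print(evaluator(model))
+ ```
+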
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 1,541,116 training samples
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive | negative |
+   |:--------|:-------|:---------|:---------|
+   | type    | string | string   | string   |
+   | details | <ul><li>min: 9 tokens</li><li>mean: 42.16 tokens</li><li>max: 187 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 42.77 tokens</li><li>max: 183 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 38.65 tokens</li><li>max: 227 tokens</li></ul> |
+ * Samples:
+   | anchor | positive | negative |
+   |:-------|:---------|:---------|
+   | <code>purpose: study attempts to fill this gap by examining firm-specific capabilities of Turkish outward FDI firms.</code> | <code>purpose: In short, the above-mentioned percentages show the lack of usage of knowledge sharing and collaborative technologies in some research institutions in Malaysia due to perceived causes such as non-availability of technology, lack of support, absent of teamwork culture, and lack of knowledge and training.</code> | <code>purpose: While SMA alone must not be used to gather and analyze these voices, these tools can guide organizations in relating to their publics, increasing the way groups identify with them and motivating these groups to enter into relationships with them.</code> |
+   | <code>purpose: In this section of the paper, we try to explain citizen attitudes towards sustainable procurement.</code> | <code>purpose: Different from previous studies to concern key factors for motivating consumers' online buying behavior and behavioral intention (Liang and Lim, 2011; Zhang et al., 2013), such finding add knowledge in the filed by finding the meaningful affective mechanism of consumers in OFGB.</code> | <code>purpose: Task significance is not significantly different among generational cohorts of knowledge workers.</code> |
+   | <code>purpose: However, the extensive use of information technology (IT) also comes with related security problems caused by the abstract nature of interacting systems - technical and organizational - and the seemingly lack of or inferior control of data or information.</code> | <code>purpose: No previous research using cluster analysis in nursing homes was found, but clusters identified in this study are lower than in previous hospital-based research into patients experiences and satisfaction used as cluster variables (Grondahl et al., 2011).</code> | <code>purpose: Yet, this engagement has tended to only involve a small section of the overall medical workforce in practice, raising questions about the nature of medical engagement more broadly and the mechanisms needed to enhance these processes.</code> |
+ * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
+   ```json
+   {
+       "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
+       "triplet_margin": 5
+   }
+   ```
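+
+ In sentence-transformers, these parameters correspond to a loss construction along these lines (a sketch; the variable names are illustrative):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, losses
+ from sentence_transformers.losses import TripletDistanceMetric
+
+ model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L12-v2")
+
+ # Euclidean distance with a margin of 5, matching the JSON above: the anchor
+ # must end up at least 5 units closer to the positive than to the negative.
+ train_loss = losses.TripletLoss(
+     model=model,
+     distance_metric=TripletDistanceMetric.EUCLIDEAN,
+     triplet_margin=5,
+ )
+ ```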
+
+ ### Evaluation Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 199,564 evaluation samples
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | anchor | positive | negative |
+   |:--------|:-------|:---------|:---------|
+   | type    | string | string   | string   |
+   | details | <ul><li>min: 9 tokens</li><li>mean: 42.64 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 42.42 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 38.23 tokens</li><li>max: 193 tokens</li></ul> |
+ * Samples:
+   | anchor | positive | negative |
+   |:-------|:---------|:---------|
+   | <code>purpose: However, it seems obvious that, in the long run, Green OA can be seen as leading progressively to the disappearance of the "traditional" publication model and, possibly, of scientific publishers altogether unless they reconsider their business model and adapt to the new situation.</code> | <code>purpose: Considering the transcendence of the sustainable development agenda in the UDRD, it was decided to search for explicit references to the issue of risk in the proposed indicators, finding a correspondence between four indicators of the development agenda and indicators proposed for the implementation of the Sendai Framework (Maskrey, 2016).</code> | <code>purpose: Finally, the terms of the permanent multinomial corresponding to the particular manufacturing system may be listed and the resulting graphs may be obtained and used for structurally analyzing the capabilities of the manufacturing system in different areas.</code> |
+   | <code>purpose: To what extent do information science and the other disciplines demonstrate interest in social network theory and social network analysis?RQ2.</code> | <code>purpose: This study explores relationships between relationship commitment, cooperative behavior and alliance performance from the perspectives of both companies and contract farmers.</code> | <code>purpose: 4.1 The respondents' health literacy skills</code> |
+   | <code>purpose: The evidence discussed above shows the nature of forecasting connections in the income growth across the globe.</code> | <code>purpose: Namely, the paper confirms that there is vast deviation between the European countries when it comes to consumer trust in banking in general but also related to each studied banking service.</code> | <code>purpose: Healthcare is one of the major sectors in which Lean production is being considered and adopted as an improvement program (Poksinska, 2010).</code> |
+ * Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
+   ```json
+   {
+       "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
+       "triplet_margin": 5
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 128
+ - `gradient_accumulation_steps`: 16
+ - `num_train_epochs`: 5
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+ - `load_best_model_at_end`: True
+ - `auto_find_batch_size`: True
+
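+ Putting the datasets, loss, and the non-default values above together gives a hedged end-to-end training sketch; `output_dir` and the one-row datasets are placeholders, and `save_strategy="epoch"` is an assumption (required by `load_best_model_at_end`, though not listed above):
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import TripletDistanceMetric, TripletLoss
+
+ model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L12-v2")
+
+ # Placeholder rows standing in for the 1,541,116 / 199,564 sample splits above
+ train_dataset = Dataset.from_dict({
+     "anchor": ["purpose: 2.2 Decentralization and participation"],
+     "positive": ["purpose: Social norm approach and feedback"],
+     "negative": ["negative: Sensemaking as local weather"],
+ })
+ eval_dataset = train_dataset  # placeholder; use the real evaluation split
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="models/facet_retriever",  # hypothetical path
+     eval_strategy="epoch",
+     save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=128,
+     gradient_accumulation_steps=16,
+     num_train_epochs=5,
+     warmup_ratio=0.1,
+     fp16=True,
+     load_best_model_at_end=True,
+     auto_find_batch_size=True,
+ )
+
+ trainer = SentenceTransformerTrainer(
+     model=model,
+     args=args,
+     train_dataset=train_dataset,
+     eval_dataset=eval_dataset,
+     loss=TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5),
+ )
+ trainer.train()
+ ```
+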
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: epoch
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 128
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 16
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 5
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: True
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch   | Step     | Training Loss | Validation Loss | triplet_cosine_accuracy |
+ |:-------:|:--------:|:-------------:|:---------------:|:-----------------------:|
+ | 0.3322  | 500      | 4.2859        | -               | -                       |
+ | 0.6645  | 1000     | 3.693         | -               | -                       |
+ | 0.9967  | 1500     | 3.5602        | -               | -                       |
+ | 1.0     | 1505     | -             | 3.4908          | 0.6914                  |
+ | 1.3289  | 2000     | 3.427         | -               | -                       |
+ | 1.6611  | 2500     | 3.3854        | -               | -                       |
+ | 1.9934  | 3000     | 3.3551        | -               | -                       |
+ | 2.0     | 3010     | -             | 3.3604          | 0.7000                  |
+ | 2.3256  | 3500     | 3.2353        | -               | -                       |
+ | 2.6578  | 4000     | 3.221         | -               | -                       |
+ | 2.9900  | 4500     | 3.2038        | -               | -                       |
+ | **3.0** | **4515** | **-**         | **3.3203**      | **0.7026**              |
+ | 3.3223  | 5000     | 3.1019        | -               | -                       |
+ | 3.6545  | 5500     | 3.0942        | -               | -                       |
+ | 3.9867  | 6000     | 3.085         | -               | -                       |
+ | 4.0     | 6020     | -             | 3.3177          | 0.7014                  |
+ | 4.3189  | 6500     | 3.0129        | -               | -                       |
+ | 4.6512  | 7000     | 3.0083        | -               | -                       |
+ | 4.9834  | 7500     | 2.9971        | -               | -                       |
+ | 5.0     | 7525     | -             | 3.3264          | 0.6998                  |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.0
+ - Transformers: 4.41.0
+ - PyTorch: 2.3.0+cu121
+ - Accelerate: 0.30.1
+ - Datasets: 2.19.1
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### TripletLoss
+ ```bibtex
+ @misc{hermans2017defense,
+     title={In Defense of the Triplet Loss for Person Re-Identification},
+     author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
+     year={2017},
+     eprint={1703.07737},
+     archivePrefix={arXiv},
+     primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "/media/sf_shared/paraphrase-MiniLM-L12-v2-triplet/final",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.40.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.40.0",
+     "pytorch": "2.0.1+cu117"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1b5b17fd5e55280dccd18ade10c7e21aadd804eec417edd0c13314e3496745f
+ size 133462128
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 256,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "max_length": 256,
+   "model_max_length": 256,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff