dawn78 committed
Commit 2419b77 · verified · 1 Parent(s): 498a3ab

Upload folder using huggingface_hub
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
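This configuration selects masked mean pooling (`pooling_mode_mean_tokens`), which the model pairs with a final L2 normalization step. A minimal numpy sketch of those two operations, with random arrays standing in for real BERT token embeddings (toy values, not the model's actual outputs):

```python
import numpy as np

# Toy "token embeddings" for one sentence: 4 tokens x 384 dims, a random
# stand-in for the transformer output (word_embedding_dimension = 384).
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(4, 384))
attention_mask = np.array([1, 1, 1, 0])  # last position is padding

# Mean pooling: average token embeddings over non-padding positions only
mask = attention_mask[:, None]
sentence_embedding = (token_embeddings * mask).sum(axis=0) / mask.sum()

# L2-normalize, so cosine similarity later reduces to a dot product
sentence_embedding /= np.linalg.norm(sentence_embedding)

print(sentence_embedding.shape)  # (384,)
```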
README.md ADDED
@@ -0,0 +1,443 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1459
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: still popular today this fresh fougere fragrance inspired many
    wannabes
  sentences:
  - pear, blackberry, herbal notes, bamboo, clove, apple, guarana, green tree accord
  - mace, hyrax, camellia, tea, akigalawood
  - mandarin, lavender, green botanics, jasmine, basil, geranium, sage, sandalwood,
    vetiver, rosewood, amber
- source_sentence: little black dress eau fraiche by avon exudes a lively and refreshing
    spirit that captivates effortlessly this fragrance opens with a bright burst of
    citrus that instantly uplifts the mood reminiscent of sunkissed afternoons as
    it unfolds delicate floral notes weave through creating an elegant bouquet that
    embodies femininity and charm the scent is anchored by a subtle musk that rounds
    out the experience providing a warm and inviting backdrop users have praised this
    fragrance for its fresh and invigorating essence making it perfect for daytime
    wear many appreciate its lightness and airy quality which is ideal for those seeking
    a scent that is both playful and sophisticated with a commendable rating of 375
    out of 5 it has earned accolades for its delightful character and versatility
    appealing to a broad audience who value a fragrance that feels both chic and approachable
    overall little black dress eau fraiche is described as an essential contemporary
    scent for the modern woman effortlessly enhancing any occasion with its vibrant
    charm
  sentences:
  - cress, lantana, castoreum, parma violet, cotton flower, oud, hesperidic notes,
    grape, olive tree, hyacinth, earthy notes, carambola, osmanthus, champaca, cypriol,
    lemon blossom, rosewood
  - yuzu, clary sage, balsam fir, cedar
  - passionflower, red currant, rosehip, almond blossom, chocolate
- source_sentence: rose blush cologne 2023 by jo malone london rose blush cologne
    presents an enchanting bouquet that captures the essence of blooming romance and
    tropical vitality with an initial sweet hint of luscious litchi and a refreshing
    touch of herbs this fragrance unfolds into a heart of delicate rose showcasing
    a radiant femininity the composition is beautifully rounded off with soft musky
    undertones adding an elegant warmth that lingers on the skin users describe rose
    blush as vibrant and joyful perfect for both everyday wear and special occasions
    reviewers appreciate its fresh appeal heralding it as an uplifting scent that
    evokes feelings of spring and renewal many highlight its moderate longevity making
    it suitable for those who desire a fragrance that gently permeates without overwhelming
    whether youre seeking a burst of floral energy or a subtle whisper of sophistication
    this perfume is sure to leave a delightful impression
  sentences:
  - honey, mahogany
  - lychee, basil, rose, musk
  - lemon, may rose, spices, peony, lily of the valley, blackcurrant, raspberry, peach,
    musk, sandalwood, amber, heliotrope, oud
- source_sentence: thank u next by ariana grande is a playful and modern fragrance
    that captures the essence of youthful exuberance and selfempowerment this charming
    scent exudes a vibrant sweetness that dances between fruity and creamy notes creating
    an inviting aura that is both uplifting and comforting users often describe this
    perfume as deliciously sweet and fun making it perfect for casual wear or a spirited
    night out the blend is frequently noted for its warm inviting quality evoking
    a sense of cheerful nostalgia many reviewers highlight its longlasting nature
    and delightful sillage ensuring that its fragrant embrace stays with you throughout
    the day perfect for the confident contemporary woman thank u next effortlessly
    combines the spirited essence of fresh berries with a creamy tropical nuance which
    is masterfully balanced by an undercurrent of sweet indulgence overall this fragrance
    is celebrated for its delightful charm and is sure to make a memorable impression
    wherever you go
  sentences:
  - cabreuva, mate, bamboo leaf, black cardamom, orris root, camellia, oriental notes,
    hibiscus, lily of the valley, lantana, wood notes
  - sea salt, amberwood, marine notes, resins, clary sage, labdanum, white musk, blonde
    woods
  - nectarine, olive tree, grass, cress, clementine, red apple
- source_sentence: zara night eau de parfum envelops you in a captivating blend of
    softness and elegance creating a rich floral experience that feels both fresh
    and inviting this fragrance exudes a charming femininity where luscious floral
    notes mingle seamlessly with a warm creamy essence that evokes a sense of comfort
    users describe it as enchanting and seductive perfect for evening wear or special
    occasions the scent captures the essence of a night blooming with possibilities
    balancing the vibrancy of fresh petals with the alluring depth of sweet undertones
    reviewers appreciate its ability to linger gracefully on the skin leaving a trail
    of sophisticated allure without being overwhelming many find it to be a delightful
    choice for those seeking a fragrance that is both versatile and memorable with
    a touch of playfulness that hints at a romantic allure with a commendable rating
    zara night is celebrated for its accessibility and charm making it a favored addition
    to any perfume collection
  sentences:
  - whiskey, bellini, cognac, blackberry, juniper berry, iris root, aldehydes, red
    currant, flint, cumin, mango, sea salt, sea notes, birch, bitter orange, marine
    notes, grapefruit blossom, hawthorn, yuzu, clementine, cream, pineapple
  - moss, sandalwood, mangosteen, cade oil
  - bergamot, galbanum, petitgrain, jasmine, narcissus, violet, carnation, rose, spices,
    blonde woods, iris, vanilla, amber
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: pearson_cosine
      value: 0.8425746761744255
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.718974393548417
      name: Spearman Cosine
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'zara night eau de parfum envelops you in a captivating blend of softness and elegance creating a rich floral experience that feels both fresh and inviting this fragrance exudes a charming femininity where luscious floral notes mingle seamlessly with a warm creamy essence that evokes a sense of comfort users describe it as enchanting and seductive perfect for evening wear or special occasions the scent captures the essence of a night blooming with possibilities balancing the vibrancy of fresh petals with the alluring depth of sweet undertones reviewers appreciate its ability to linger gracefully on the skin leaving a trail of sophisticated allure without being overwhelming many find it to be a delightful choice for those seeking a fragrance that is both versatile and memorable with a touch of playfulness that hints at a romantic allure with a commendable rating zara night is celebrated for its accessibility and charm making it a favored addition to any perfume collection',
    'moss, sandalwood, mangosteen, cade oil',
    'whiskey, bellini, cognac, blackberry, juniper berry, iris root, aldehydes, red currant, flint, cumin, mango, sea salt, sea notes, birch, bitter orange, marine notes, grapefruit blossom, hawthorn, yuzu, clementine, cream, pineapple',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
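Because the card's similarity function is cosine and the embeddings come out unit-normalized, ranking note lists against a fragrance description reduces to dot products. A self-contained numpy sketch with made-up 3-dimensional vectors standing in for `model.encode` output (not real embeddings):

```python
import numpy as np

def cosine_top_k(query_emb, corpus_embs, k=2):
    """Rank corpus rows by cosine similarity to the query, highest first."""
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    scores = c @ q                       # cosine scores after normalization
    order = np.argsort(-scores)[:k]      # indices of the top-k scores
    return [(int(i), float(scores[i])) for i in order]

# Stand-in embeddings; in practice these come from model.encode(...)
query = np.array([1.0, 0.0, 0.0])
corpus = np.array([
    [0.9, 0.1, 0.0],   # nearly parallel to the query
    [0.0, 1.0, 0.0],   # orthogonal
    [0.7, 0.7, 0.0],   # in between
])
print(cosine_top_k(query, corpus, k=2))
```

With real inputs, `query` would be the encoded description and `corpus` the encoded note lists; the returned indices point at the best-matching notes.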

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Semantic Similarity

* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value     |
|:--------------------|:----------|
| pearson_cosine      | 0.8426    |
| **spearman_cosine** | **0.719** |

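Both metrics correlate the model's cosine scores with the gold labels: Pearson on the raw values, Spearman on their ranks. A small self-contained sketch of the two statistics (toy scores, not the actual evaluation data):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman = Pearson computed on ranks (no ties in this toy data)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

gold = [0.0, 0.25, 0.5, 1.0]   # toy similarity labels
pred = [0.1, 0.3, 0.45, 0.9]   # toy model cosine scores

print(round(pearson(gold, pred), 3))
print(spearman(gold, pred))     # 1.0: the ordering is identical
```

Spearman hits 1.0 here because only rank order matters, while Pearson stays slightly below 1.0 since the values are not perfectly linear.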
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 1,459 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
  | type    | string | string | float |
  | details | <ul><li>min: 12 tokens</li><li>mean: 182.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 33.83 tokens</li><li>max: 101 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.25</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------|:-----------------|
  | <code>today tomorrow always in love by avon embodying a sense of timeless romance today tomorrow always in love is an enchanting fragrance that strikes a perfect balance between freshness and warmth this captivating scent opens with bright effervescent notes that evoke images of blooming gardens and sunlit moments as the fragrance unfolds it reveals a charming bouquet that celebrates femininity featuring delicate floral elements that wrap around the wearer like a cherished embrace users describe this perfume as uplifting and evocative making it an ideal companion for both everyday wear and special occasions many reviewers appreciate its elegant character highlighting its multifaceted nature that seamlessly transitions from day to night while some find it subtly sweet and playful others cherish its musky undertones which lend a depth that enhances its allure overall with a moderate rating that suggests a solid appreciation among wearers today tomorrow always in love captures the essence of ro...</code> | <code>lotus, neroli, carambola, pomegranate, tuberose, gardenia, tuberose, pepper, musk, woody notes, amber</code> | <code>1.0</code> |
  | <code>mankind hero by kenneth cole encapsulates a vibrant and adventurous spirit designed for the modern man who embraces both freshness and sophistication this fragrance unfolds with an invigorating burst reminiscent of a brisk mountain breeze seamlessly paired with a zesty hint of citrus the aromatic heart introduces a soothing edginess where lavender and warm vanilla intertwine creating a balanced yet captivating profile as it settles an inviting warmth emerges enriched by woody undertones that linger pleasantly on the skin users have praised mankind hero for its versatile character suitable for both casual outings and formal occasions many describe it as longlasting and unique appreciating the balanced blend that feels both refreshing and comforting the overall sentiment reflects a sense of confidence and elegance making this scent a cherished addition to a mans fragrance collection it has garnered favorable reviews boasting a solid rating that underscores its appeal embrace the essence ...</code> | <code>mountain air, lemon, coriander, lavender, vanilla, clary sage, plum, musk, coumarin, amberwood, oak moss</code> | <code>1.0</code> |
  | <code>black essential dark by avon immerse yourself in the captivating allure of black essential dark a fragrance that elegantly marries the depth of aromatic woods with a touch of leathers sensuality this modern scent envelops the wearer in a rich and sophisticated aura exuding confidence and a hint of mystery users describe it as both refreshing and spicy with an invigorating blend that feels perfect for the urban man who embraces lifes more daring adventures crafted with meticulous attention by perfumer mike parrot this fragrance has garnered a solid reputation amongst enthusiasts resulting in a commendable 405 rating from its admirers many find it to be versatile enough for both day and night wear making it an essential companion for various occasions reviewers frequently highlight its longlasting presence creating an inviting and memorable impression with a delicate yet commanding presence black essential dark is ideal for those looking to leave a mark without overpowering the senses wh...</code> | <code>mint, bay leaf, cedar needle, passionflower, black cardamom, flint, rice, teak wood, cedar leaf</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
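CosineSimilarityLoss with MSELoss fits the cosine of the two sentence embeddings to the 0/1 label. A numpy sketch of the quantity being minimized per pair (toy 2-dimensional vectors, not the trainer's internals):

```python
import numpy as np

def cosine_similarity_loss(u, v, label):
    """Squared error between cosine(u, v) and the gold similarity label."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float((cos - label) ** 2)

# Matching pair (label 1.0) with nearly parallel embeddings: tiny loss
print(cosine_similarity_loss([1.0, 0.1], [1.0, 0.0], 1.0))
# Non-matching pair (label 0.0) whose embeddings are parallel anyway: large loss
print(cosine_similarity_loss([1.0, 0.0], [1.0, 0.0], 0.0))
```

Training pushes description/notes pairs labeled 1.0 toward cosine 1 and pairs labeled 0.0 toward cosine 0.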

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch  | Step | spearman_cosine |
|:------:|:----:|:---------------:|
| 1.0    | 46   | 0.5799          |
| 1.0870 | 50   | 0.6061          |
| 2.0    | 92   | 0.6940          |
| 2.1739 | 100  | 0.6940          |
| 3.0    | 138  | 0.7072          |
| 3.2609 | 150  | 0.7124          |
| 4.0    | 184  | 0.7150          |
| 4.3478 | 200  | 0.7177          |
| 5.0    | 230  | 0.7190          |

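The step counts in the log follow directly from the dataset size and batch size: ceil(1459 / 32) = 46 optimizer steps per epoch, so 5 epochs finish at step 230, matching the final log row. A quick check:

```python
import math

dataset_size = 1459   # from dataset_size:1459 in the card metadata
batch_size = 32       # per_device_train_batch_size
epochs = 5            # num_train_epochs

# The last (partial) batch still counts as one optimizer step
steps_per_epoch = math.ceil(dataset_size / batch_size)
print(steps_per_epoch)            # 46
print(steps_per_epoch * epochs)   # 230
```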
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,26 @@
{
  "_name_or_path": "sentence-transformers/all-MiniLM-L6-v2",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.47.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.3.1",
    "transformers": "4.47.1",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac37e8921eff6ac06bff6101cdd804f632560e05a625019c710180555e152f4a
size 90864192
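The 90,864,192-byte float32 checkpoint is consistent with the dimensions in `config.json`: a back-of-the-envelope parameter count at 4 bytes per parameter lands within the safetensors header overhead of the file size. This is an estimate, not an exact accounting; the pooler term and the treatment of bias/LayerNorm parameters are assumptions about what the checkpoint stores:

```python
# Rough BERT parameter count from config.json's dimensions (assumption:
# the checkpoint stores the full BertModel, including its pooler head).
vocab, hidden, inter, layers, max_pos = 30522, 384, 1536, 6, 512

embeddings = (vocab + max_pos + 2) * hidden + 2 * hidden   # word/pos/type + LayerNorm
per_layer = (4 * (hidden * hidden + hidden)                # Q, K, V, attention output
             + hidden * inter + inter                      # FFN up-projection
             + inter * hidden + hidden                     # FFN down-projection
             + 2 * 2 * hidden)                             # two LayerNorms
pooler = hidden * hidden + hidden
total = embeddings + layers * per_layer + pooler

print(total)       # ~22.7M parameters
print(total * 4)   # ~90.85 MB in float32, close to the 90,864,192-byte file
```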
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 256,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 128,
  "model_max_length": 256,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff