tomaarsen committed

Commit ef21bc5
1 Parent(s): e9c1132

Add new SentenceTransformer model.
0_BoW/config.json ADDED
The diff for this file is too large to render. See raw diff
 
1_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 25000, "out_features": 768, "bias": true, "activation_function": "torch.nn.modules.activation.Tanh"}
1_Dense/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b5221b90f7e4cf5cee24727ef1c7a967498b8eb0dc6fc51832a2946e15c905b
+ size 76804732
2_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 768, "out_features": 512, "bias": true, "activation_function": "torch.nn.modules.activation.Tanh"}
2_Dense/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32f83fbf38ecf121f96d0108fe17b7a07d911c9401e86f573c588c1c0bd2103b
+ size 1576572
README.md ADDED
@@ -0,0 +1,511 @@
+ ---
+ language:
+ - en
+ library_name: sentence-transformers
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - loss:CosineSimilarityLoss
+ metrics:
+ - pearson_cosine
+ - spearman_cosine
+ - pearson_manhattan
+ - spearman_manhattan
+ - pearson_euclidean
+ - spearman_euclidean
+ - pearson_dot
+ - spearman_dot
+ - pearson_max
+ - spearman_max
+ widget:
+ - source_sentence: A man is spitting.
+   sentences:
+   - A man seasoning quail.
+   - A brown horse in a green field.
+   - A woman is playing the guitar.
+ - source_sentence: A woman is reading.
+   sentences:
+   - A woman is slicing carrot.
+   - The man is hiking in the woods.
+   - A man is singing and playing a guitar.
+ - source_sentence: A woman is dancing.
+   sentences:
+   - A woman is dancing in railway station.
+   - A doctor prescribes a medicine.
+   - The man is riding a horse.
+ - source_sentence: Women are running.
+   sentences:
+   - Women are running.
+   - A woman is applying eye shadow.
+   - A woman and man are riding in a car.
+ - source_sentence: A cat is on a robot.
+   sentences:
+   - A cat is pouncing on a trampoline.
+   - A woman is applying eye shadow.
+   - A woman and man are riding in a car.
+ pipeline_tag: sentence-similarity
+ co2_eq_emissions:
+   emissions: 0.11798947049821952
+   energy_consumed: 0.0003035473717609365
+   source: codecarbon
+   training_type: fine-tuning
+   on_cloud: false
+   cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
+   ram_total_size: 31.777088165283203
+   hours_used: 0.002
+   hardware_used: 1 x NVIDIA GeForce RTX 3090
+ model-index:
+ - name: SentenceTransformer
+   results:
+   - task:
+       type: semantic-similarity
+       name: Semantic Similarity
+     dataset:
+       name: sts dev
+       type: sts-dev
+     metrics:
+     - type: pearson_cosine
+       value: 0.7327950331192871
+       name: Pearson Cosine
+     - type: spearman_cosine
+       value: 0.733720742976967
+       name: Spearman Cosine
+     - type: pearson_manhattan
+       value: 0.5141829243804352
+       name: Pearson Manhattan
+     - type: spearman_manhattan
+       value: 0.5088476055041519
+       name: Spearman Manhattan
+     - type: pearson_euclidean
+       value: 0.5143122485153392
+       name: Pearson Euclidean
+     - type: spearman_euclidean
+       value: 0.5094438567737941
+       name: Spearman Euclidean
+     - type: pearson_dot
+       value: 0.5691313208318369
+       name: Pearson Dot
+     - type: spearman_dot
+       value: 0.6686075432867175
+       name: Spearman Dot
+     - type: pearson_max
+       value: 0.7327950331192871
+       name: Pearson Max
+     - type: spearman_max
+       value: 0.733720742976967
+       name: Spearman Max
+   - task:
+       type: semantic-similarity
+       name: Semantic Similarity
+     dataset:
+       name: sts test
+       type: sts-test
+     metrics:
+     - type: pearson_cosine
+       value: 0.6515536111902664
+       name: Pearson Cosine
+     - type: spearman_cosine
+       value: 0.6357551120651417
+       name: Spearman Cosine
+     - type: pearson_manhattan
+       value: 0.4104283118123022
+       name: Pearson Manhattan
+     - type: spearman_manhattan
+       value: 0.4057805136887886
+       name: Spearman Manhattan
+     - type: pearson_euclidean
+       value: 0.4116066558734167
+       name: Pearson Euclidean
+     - type: spearman_euclidean
+       value: 0.40663312273612934
+       name: Spearman Euclidean
+     - type: pearson_dot
+       value: 0.4717437134789646
+       name: Pearson Dot
+     - type: spearman_dot
+       value: 0.5536656048436931
+       name: Spearman Dot
+     - type: pearson_max
+       value: 0.6515536111902664
+       name: Pearson Max
+     - type: spearman_max
+       value: 0.6357551120651417
+       name: Spearman Max
+ ---
+
+ # SentenceTransformer
+
+ This is a [sentence-transformers](https://www.SBERT.net) model trained on the [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) dataset. It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** None
+ - **Output Dimensionality:** 512 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+     - [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
+ - **Language:** en
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): BoW()
+   (1): Dense({'in_features': 25000, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
+   (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
+ )
+ ```
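+
+ For reference, an equivalent pipeline can be assembled by hand from `sentence_transformers.models`. The sketch below uses a toy vocabulary with uniform weights; the trained model's 25,000-term TF-IDF vocabulary and word weights live in `0_BoW/config.json`, and the exact construction call used for this repository is an assumption:
+
+ ```python
+ import torch.nn as nn
+ from sentence_transformers import SentenceTransformer, models
+
+ # Toy stand-in: the real model uses 25,000 vocabulary terms with TF-IDF weights
+ vocab = ["cat", "robot", "woman", "man", "guitar"]
+ word_weights = {word: 1.0 for word in vocab}
+
+ # BoW produces a len(vocab)-dimensional weighted bag-of-words vector per sentence
+ bow = models.BoW(vocab=vocab, word_weights=word_weights)
+ # Two feed-forward layers with Tanh map it down to 768, then 512 dimensions
+ dense1 = models.Dense(in_features=len(vocab), out_features=768, activation_function=nn.Tanh())
+ dense2 = models.Dense(in_features=768, out_features=512, activation_function=nn.Tanh())
+
+ model = SentenceTransformer(modules=[bow, dense1, dense2])
+ ```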
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow")
+ # Run inference
+ sentences = [
+     'A cat is on a robot.',
+     'A cat is pouncing on a trampoline.',
+     'A woman is applying eye shadow.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # (3, 512)
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # torch.Size([3, 3])
+ ```
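+
+ Beyond pairwise scores, the embeddings also support the semantic search use case mentioned above. A small sketch using `sentence_transformers.util.semantic_search`; the corpus and query here are illustrative:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow")
+
+ # Illustrative corpus; any list of sentences works
+ corpus = [
+     "A man is playing a guitar.",
+     "A woman is slicing a carrot.",
+     "A cat is pouncing on a trampoline.",
+ ]
+ corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
+ query_embedding = model.encode("Someone plays music.", convert_to_tensor=True)
+
+ # Rank the corpus by cosine similarity to the query and keep the top 2 hits
+ hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
+ for hit in hits[0]:
+     print(corpus[hit["corpus_id"]], hit["score"])
+ ```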
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Semantic Similarity
+ * Dataset: `sts-dev`
+ * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | pearson_cosine      | 0.7328     |
+ | **spearman_cosine** | **0.7337** |
+ | pearson_manhattan   | 0.5142     |
+ | spearman_manhattan  | 0.5088     |
+ | pearson_euclidean   | 0.5143     |
+ | spearman_euclidean  | 0.5094     |
+ | pearson_dot         | 0.5691     |
+ | spearman_dot        | 0.6686     |
+ | pearson_max         | 0.7328     |
+ | spearman_max        | 0.7337     |
+
+ #### Semantic Similarity
+ * Dataset: `sts-test`
+ * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | pearson_cosine      | 0.6516     |
+ | **spearman_cosine** | **0.6358** |
+ | pearson_manhattan   | 0.4104     |
+ | spearman_manhattan  | 0.4058     |
+ | pearson_euclidean   | 0.4116     |
+ | spearman_euclidean  | 0.4066     |
+ | pearson_dot         | 0.4717     |
+ | spearman_dot        | 0.5537     |
+ | pearson_max         | 0.6516     |
+ | spearman_max        | 0.6358     |
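+
+ Both tables can be reproduced with the evaluator named above. A minimal sketch for the sts-dev numbers, assuming the stsb validation split described under Training Details:
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
+
+ model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow")
+ eval_dataset = load_dataset("sentence-transformers/stsb", split="validation")
+
+ dev_evaluator = EmbeddingSimilarityEvaluator(
+     sentences1=eval_dataset["sentence1"],
+     sentences2=eval_dataset["sentence2"],
+     scores=eval_dataset["score"],
+     name="sts-dev",
+ )
+ # Returns the metrics in the table above, e.g. Pearson/Spearman over cosine scores
+ print(dev_evaluator(model))
+ ```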
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### sentence-transformers/stsb
+
+ * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a)
+ * Size: 5,749 training samples
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence1 | sentence2 | score |
+   |:--------|:----------|:----------|:------|
+   | type    | string    | string    | float |
+   | details | <ul><li>min: 16 characters</li><li>mean: 31.92 characters</li><li>max: 113 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 31.51 characters</li><li>max: 94 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | sentence1 | sentence2 | score |
+   |:----------|:----------|:------|
+   | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
+   | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
+   | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
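+
+ Concretely, this loss embeds both sentences, computes their cosine similarity, and regresses it against the gold score with MSE. A minimal sketch of how it is constructed:
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.losses import CosineSimilarityLoss
+
+ model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow")
+ train_dataset = load_dataset("sentence-transformers/stsb", split="train")
+
+ # MSE between cosine(embedding1, embedding2) and the gold score in [0, 1]
+ loss = CosineSimilarityLoss(model)
+ ```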
+
+ ### Evaluation Dataset
+
+ #### sentence-transformers/stsb
+
+ * Dataset: [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [d999f12](https://huggingface.co/datasets/sentence-transformers/stsb/tree/d999f12281623b0925506817d9bd85e88289218a)
+ * Size: 1,500 evaluation samples
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence1 | sentence2 | score |
+   |:--------|:----------|:----------|:------|
+   | type    | string    | string    | float |
+   | details | <ul><li>min: 12 characters</li><li>mean: 57.37 characters</li><li>max: 144 characters</li></ul> | <ul><li>min: 17 characters</li><li>mean: 56.84 characters</li><li>max: 141 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | sentence1 | sentence2 | score |
+   |:----------|:----------|:------|
+   | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
+   | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
+   | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: False
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: None
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
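+
+ With these settings, the run reduces to a short `SentenceTransformerTrainer` script. A sketch under the non-default hyperparameters above; the output directory is illustrative, and the original run started from a freshly assembled BoW pipeline rather than this published checkpoint:
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import CosineSimilarityLoss
+
+ model = SentenceTransformer("tomaarsen/wikipedia-tf-idf-bow")
+ dataset = load_dataset("sentence-transformers/stsb")
+ loss = CosineSimilarityLoss(model)
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="models/bow-stsb",  # illustrative path
+     num_train_epochs=1,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=32,
+     warmup_ratio=0.1,
+     fp16=True,
+     eval_strategy="steps",
+ )
+
+ trainer = SentenceTransformerTrainer(
+     model=model,
+     args=args,
+     train_dataset=dataset["train"],
+     eval_dataset=dataset["validation"],
+     loss=loss,
+ )
+ trainer.train()
+ ```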
+
+ ### Training Logs
+ | Epoch  | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
+ |:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
+ | 0.5556 | 100  | 0.0725        | 0.0436          | 0.7337                  | -                        |
+ | 1.0    | 180  | -             | -               | -                       | 0.6358                   |
+
+
+ ### Environmental Impact
+ Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
+ - **Energy Consumed**: 0.0003 kWh
+ - **Carbon Emitted**: 0.00012 kg of CO2
+ - **Hours Used**: 0.002 hours
+
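+ As a rough sketch of how such a measurement is taken (not the exact instrumentation used for this run), CodeCarbon wraps the training loop in a tracker:
+
+ ```python
+ from codecarbon import EmissionsTracker
+
+ tracker = EmissionsTracker()
+ tracker.start()
+ # ... run training here ...
+ emissions_kg = tracker.stop()  # estimated kg of CO2-eq for the tracked span
+ print(emissions_kg)
+ ```
+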
+ ### Training Hardware
+ - **On Cloud**: No
+ - **GPU Model**: 1 x NVIDIA GeForce RTX 3090
+ - **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
+ - **RAM Size**: 31.78 GB
+
+ ### Framework Versions
+ - Python: 3.11.6
+ - Sentence Transformers: 3.0.0.dev0
+ - Transformers: 4.41.0.dev0
+ - PyTorch: 2.3.0+cu121
+ - Accelerate: 0.26.1
+ - Datasets: 2.18.0
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.0.dev0",
+     "transformers": "4.41.0.dev0",
+     "pytorch": "2.3.0+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "0_BoW",
+     "type": "sentence_transformers.models.BoW"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Dense",
+     "type": "sentence_transformers.models.Dense"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Dense",
+     "type": "sentence_transformers.models.Dense"
+   }
+ ]