pankajrajdeo committed
Commit 4f12ec1
1 Parent(s): 451c72e

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
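
The pooling configuration above selects plain mean pooling over token embeddings. A minimal sketch of what that computes (a mask-weighted average of the transformer's token embeddings; the function and tensor names here are illustrative, not code shipped in this repository):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average the embeddings of real (non-padding) tokens.

    token_embeddings: (batch, seq_len, 384) transformer output
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    Returns:          (batch, 384) sentence embeddings
    """
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)     # sum of real-token vectors
    counts = mask.sum(dim=1).clamp(min=1e-9)          # number of real tokens
    return summed / counts                            # masked mean
```
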
README.md ADDED
@@ -0,0 +1,487 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8994
- loss:MultipleNegativesRankingLoss
base_model: pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2
widget:
- source_sentence: Real estate data to analyse the relationship between property prices, sustainability levels and socio-economic indicators. Recent studies have sought to explore the relationship between environmental and financial performance, in particular the relationship between the energy efficiency level of a building and its financial value. The present real estate dataset contains 43 variables of repeat sales transactions, energy performance certificate (EPC) rating, index of multiple deprivation (IMD), and geographical location of properties in England and Wales involved in a total of 4,201 transactions from 1995 to 2012. This dataset enables researchers and practitioners to further explore important questions regarding the nexus between the real estate industry, sustainability levels, and socio-economic aspects. Due to the scarcity of publicly available quality real estate data, the dataset detailed in this article may play a relevant role by becoming easily discoverable, clearly explained, and structured to be ready to be used by researchers, analysts, and policymakers. The empirical analysis of the economic case for energy-efficient dwellings in the UK private rental market performed in Fuerst, et al. is based on this dataset.
  sentences:
  - How do congenital glycosylation disorders impact the development and function of various bodily systems, and what are the implications for patient care and management?
  - How do environmental and socio-economic factors influence property prices and what are the implications for sustainable development policies?
  - How do anisotropic properties of organic semiconductors influence their kinetic behavior and potential applications?
- source_sentence: 'Mitochondrial DNA sequence and gene organization in the [corrected] Australian blacklip [corrected] abalone Haliotis rubra (leach). The complete mitochondrial DNA of the blacklip abalone Haliotis rubra (Gastropoda: Mollusca) was cloned and 16,907 base pairs were sequenced. The sequence represents an estimated 99.85% of the mitochondrial genome, and contains 2 ribosomal RNA, 22 transfer RNA, and 13 protein-coding genes found in other metazoan mtDNA. An AT tandem repeat and a possible C-rich domain within the putative control region could not be fully sequenced. The H. rubra mtDNA gene order is novel for mollusks, separated from the black chiton Katharina tunicata by the individual translocations of 3 tRNAs. Compared with other mtDNA regions, sequences from the ATP8, NAD2, NAD4L, NAD6, and 12S rRNA genes, as well as the control region, are the most variable among representatives from Mollusca, Arthropoda, and Rhynchonelliformea, with similar mtDNA arrangements to H. rubra. These sequences are being evaluated as genetic markers within commercially important Haliotis species, and some applications and considerations for their use are discussed.'
  sentences:
  - What are the potential mechanisms underlying the high variability of certain mitochondrial DNA sequences, such as those found in the ATP8, NAD2, NAD4L, NAD6, and 12S rRNA genes, and how might this impact their use as genetic markers?
  - How do interactions between an individual's genetic makeup and their diet influence the development of their gut microbiome?
  - How do animal models contribute to our understanding of human metabolic disorders and what are the implications for developing therapeutic interventions?
- source_sentence: 'Maxillary lateral incisor with two roots: a case report. Although the dental literature has indicated that 100% of maxillary lateral incisors have a single canal anatomy, it is possible for these teeth to have extra canals. These extra canals must be identified and debrided to prevent endodontic failure. This report presents an uncommon case involving a maxillary lateral incisor with two roots. Even when the frequency of radicular anatomy abnormality is extremely low, dentists must consider the possibility that a tooth has extra root canals or even extra roots.'
  sentences:
  - What are the underlying neural mechanisms by which early life experiences, such as malnutrition or enriched environments, shape synaptic plasticity and memory formation in adulthood?
  - How do the complexities of identifying and managing multiple root canals or roots in a single tooth impact endodontic procedures and success rates?
  - What are the molecular mechanisms underlying the aberrant processing and signaling of truncated receptor polypeptides, and how can these be targeted therapeutically?
- source_sentence: 'Novel methodology for the evaluation of symptoms reported by patients with newly diagnosed atrial fibrillation: Application of natural language processing to electronic medical records data. INTRODUCTION: Understanding symptom patterns in atrial fibrillation patients. The incidence rate of symptom reports was highest at 0-3 months post-diagnosis and lower at >3-6 and >6-12 months (pre-defined timepoints). Across all time periods, the most common symptoms were dyspnea or shortness of breath, followed by syncope, presyncope, lightheadedness, or dizziness. Similar temporal patterns of symptom reports were observed among patients with prescriptions for dronedarone or sotalol as first-line treatment. CONCLUSION: This study illustrates that NLP can be applied to EMR data to characterize symptom reports in patients with incident AF, and the potential for these methods to inform comparative effectiveness.'
  sentences:
  - How do electronic medical records and natural language processing contribute to understanding patient-reported symptoms across various chronic conditions?
  - How do advances in endoscopic technologies impact the diagnosis and treatment of gastrointestinal diseases?
  - What are the fundamental limitations and challenges associated with generating high-quality 3D phase-only holograms using deep learning-based methods, and how might these be addressed through innovative dataset creation or training protocols?
- source_sentence: 'Integrating virtual reality video games into practice: clinicians'' experiences. The Nintendo Wii is a popular virtual reality (VR) video gaming system in rehabilitation practice and research. As evidence emerges related to its effectiveness as a physical therapy training method, clinicians require information about the pragmatics of its use in practice. The purpose of this descriptive qualitative study is to explore observations and insights from a sample of physical therapists (PTs) working with children with acquired brain injury regarding practical implications of using the Wii as a physical therapy intervention. Six PTs employed at a children''s rehabilitation center participated in semi-structured interviews, which were transcribed and analyzed using content analysis. Two themes summarize the practical implications of Wii use: 1) technology meets clinical practice; and 2) onus is on the therapist. Therapists described both beneficial and challenging implications arising from the intersection of technology and practice, and reported the personal commitment required to orient oneself to the gaming system and capably implement this intervention. Findings include issues that may be relevant to professional development in a broader rehabilitation context, including suggestions for the content of educational initiatives and the need for institutional support from managers in the form of physical resources for VR implementation.'
  sentences:
  - What are the broader implications of using viral vectors like rVSV for vaccine development against emerging and re-emerging pathogens, particularly those with high mortality rates?
  - How do variations in surface preparation and condensation techniques affect the bonding and mechanical integrity of amalgam repairs?
  - How do clinicians balance the integration of emerging technologies with established clinical practices?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2](https://huggingface.co/pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2](https://huggingface.co/pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2) <!-- at revision bd0601028c5297a85cd3e6cfe15479749e00044a -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
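
To double-check these settings on the downloaded model, a short sketch using standard sentence-transformers accessors (the printed values should match the architecture listed above):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2-QA_10K")

print(model.max_seq_length)                      # 1024
print(model.get_sentence_embedding_dimension())  # 384
print(model[1].get_config_dict())                # pooling settings from 1_Pooling/config.json
```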
153
+
154
+ ## Usage
155
+
156
+ ### Direct Usage (Sentence Transformers)
157
+
158
+ First install the Sentence Transformers library:
159
+
160
+ ```bash
161
+ pip install -U sentence-transformers
162
+ ```
163
+
164
+ Then you can load this model and run inference.
165
+ ```python
166
+ from sentence_transformers import SentenceTransformer
167
+
168
+ # Download from the 🤗 Hub
169
+ model = SentenceTransformer("pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2-QA_10K")
170
+ # Run inference
171
+ sentences = [
172
+ "Integrating virtual reality video games into practice: clinicians' experiences. The Nintendo Wii is a popular virtual reality (VR) video gaming system in rehabilitation practice and research. As evidence emerges related to its effectiveness as a physical therapy training method, clinicians require information about the pragmatics of its use in practice. The purpose of this descriptive qualitative study is to explore observations and insights from a sample of physical therapists (PTs) working with children with acquired brain injury regarding practical implications of using the Wii as a physical therapy intervention. Six PTs employed at a children's rehabilitation center participated in semi-structured interviews, which were transcribed and analyzed using content analysis. Two themes summarize the practical implications of Wii use: 1) technology meets clinical practice; and 2) onus is on the therapist. Therapists described both beneficial and challenging implications arising from the intersection of technology and practice, and reported the personal commitment required to orient oneself to the gaming system and capably implement this intervention. Findings include issues that may be relevant to professional development in a broader rehabilitation context, including suggestions for the content of educational initiatives and the need for institutional support from managers in the form of physical resources for VR implementation.",
173
+ 'How do clinicians balance the integration of emerging technologies with established clinical practices?',
174
+ 'How do variations in surface preparation and condensation techniques affect the bonding and mechanical integrity of amalgam repairs?',
175
+ ]
176
+ embeddings = model.encode(sentences)
177
+ print(embeddings.shape)
178
+ # [3, 384]
179
+
180
+ # Get the similarity scores for the embeddings
181
+ similarities = model.similarity(embeddings, embeddings)
182
+ print(similarities.shape)
183
+ # [3, 3]
184
+ ```
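
Because the training pairs couple article text with broad research questions (see Training Details below), a natural use is ranking candidate questions against a passage. A small sketch of that pattern, reusing strings from this card; the variable names are illustrative, not part of an official API:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2-QA_10K")

passage = "Repair strength of dental amalgams. This study tested the hypothesis that newly triturated amalgam condensed vertically on old amalgam was essential for establishing a bond between the new and old amalgams."
questions = [
    "How do variations in surface preparation and condensation techniques affect the bonding and mechanical integrity of amalgam repairs?",
    "How do clinicians balance the integration of emerging technologies with established clinical practices?",
]

# Encode the passage and the candidate questions, then rank questions by cosine similarity
passage_emb = model.encode([passage])
question_embs = model.encode(questions)
scores = model.similarity(passage_emb, question_embs)  # shape (1, 2)
best = scores[0].argmax().item()
print(questions[best], float(scores[0][best]))
```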

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 8,994 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details | <ul><li>min: 19 tokens</li><li>mean: 268.28 tokens</li><li>max: 808 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 27.58 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>Biallelic variants in DNA2 cause microcephalic primordial dwarfism. Microcephalic primordial dwarfism and c.74+4A>C) found in these individuals substantially impair DNA2 transcript splicing. Additionally, we identify a missense variant, affecting a residue of the ATP-dependent helicase domain that is highly conserved between humans and yeast, with the resulting substitution (p.Thr655Ala) predicted to directly impact ATP/ADP (adenosine diphosphate) binding by DNA2. Our findings support the pathogenicity of these variants as biallelic hypomorphic mutations, establishing DNA2 as an MPD disease gene.</code> | <code>How do genetic variations in genes involved in DNA replication and repair contribute to human developmental disorders?</code> |
  | <code>Psychological Distress as a Primer for Sexual Risk Taking Among Emerging Adults. Emerging adults experience increased morbidity as a result of psychological distress and risky sexual behavior. This study examines how sexual behaviors (for example, condom use inconsistency and past year STI history) differ among emerging adults with low, moderate, and high psychological distress. Participants are 251,254 emerging adults attending colleges and universities in the United States who participated in the National College Health Assessment (NCHA). Findings suggest a dose-response relationship between psychological distress, condom use inconsistency, and past STI history, such that an association between greater psychological distress and condom use inconsistency and/or past year history of sexually transmitted infections (STIs).</code> | <code>How do mental health factors influence the likelihood of engaging in high-risk behaviors among young adults?</code> |
  | <code>Long-Term Safety of Teriflunomide in Multiple Sclerosis Patients: Results of Prospective Comparative Studies in Three European Countries. BACKGROUND AND OBJECTIVES: Teriflunomide is a disease-modifying therapy (DMT) for multiple sclerosis (MS). This post authorisation safety study assessed risks of adverse events of special interest (AESI) associated with teriflunomide use. METHODS: Secondary use of individual data from the Danish MS Registry (DMSR), the French National Health Data System (SNDS), the Belgian national database of health care claims (AIM-IMA) and the Belgian Treatments in MS Registry (Beltrims). We included patients treated with a DMT at the date of teriflunomide reimbursement or initiating another DMT. Adjusted hazard rates (aHR) and 95% confidence intervals were derived from Cox models with time-dependent exposure comparing teriflunomide treatment with another DMT. RESULTS: Of 81 620 patients (72% women) included in the cohort, 22 324 (27%) were treated with teriflunom...</code> | <code>What are the potential mechanisms underlying the observed differences in risk profiles between teriflunomide and other disease-modifying therapies, particularly with regards to opportunistic infections and renal failure?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
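
MultipleNegativesRankingLoss treats the other positives in a batch as negatives: each anchor should score its own positive (by cosine similarity scaled by 20) higher than every other positive in the same batch. A minimal, hedged sketch of constructing the loss with these parameters, assuming the base model named in this card:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2")  # base model listed above

# In-batch negatives ranking loss with the parameters shown in the JSON block
loss = losses.MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=util.cos_sim)
```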

### Evaluation Dataset

#### Unnamed Dataset

* Size: 1,000 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details | <ul><li>min: 20 tokens</li><li>mean: 272.12 tokens</li><li>max: 935 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 27.77 tokens</li><li>max: 56 tokens</li></ul> |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | <code>Posttraumatic Cognitions and Suicidal Ideation among Veterans receiving PTSD Treatment. With approximately 20 veteran suicide deaths per day, suicidal ideation (SI) among veterans is an important concern. Posttraumatic stress disorder (PTSD) is associated with SI among veterans, yet mechanisms of this relationship remain unclear. Negative posttraumatic cognitions contribute to the development and maintenance of PTSD, yet no studies have prospectively examined the relationship between posttraumatic cognitions and SI. Veterans (N = 177; 66% Male) participating in a 3-week intensive outpatient program for PTSD completed assessments of PTSD severity, depressive symptoms, SI, and posttraumatic cognitions. Negative posttraumatic cognitions about the self significantly predicted SI at posttreatment, controlling for pretreatment levels of SI, depression, and PTSD symptom severity. Self-blame and negative posttraumatic cognitions about others/world did not predict SI prospectively. Negative pos...</code> | <code>What are the underlying psychological mechanisms by which self-blame and negative cognitions about oneself or others/world influence suicidal ideation in veterans with PTSD?</code> |
  | <code>Bilirubin increases insulin sensitivity in leptin-receptor deficient and diet-induced obese mice through suppression of ER stress and chronic inflammation. Obesity-induced endoplasmic reticulum (ER) stress causes chronic inflammation in adipose tissue and steatosis in the liver, and eventually leads to insulin resistance and type 2 diabetes (T2D). The goal of this study was to understand the mechanisms by which administration of bilirubin, a powerful antioxidant, reduces hyperglycemia and ameliorates obesity in leptin-receptor-deficient (db/db) and diet-induced obese (DIO) mouse models. db/db or DIO mice were injected with bilirubin or vehicle ip. Blood glucose and body weight were measured. Activation of insulin-signaling pathways, expression of inflammatory cytokines, and ER stress markers were measured in skeletal muscle, adipose tissue, and liver of mice. Bilirubin administration significantly reduced hyperglycemia and increased insulin sensitivity in db/db mice. Bilirubin treatmen...</code> | <code>What cellular pathways and stress responses contribute to the development of insulin resistance in obesity, and how can they be targeted therapeutically?</code> |
  | <code>Repair strength of dental amalgams. This study tested the hypothesis that newly triturated amalgam condensed vertically on old amalgam was essential for establishing a bond between the new and old amalgams. Twelve rectangular bars were prepared with Dispersalloy and Tytin to establish their baseline flexure strength values. An additional 12 specimens were made and separated into 24 equal halves. All fracture surfaces were abraded with a flat end fissure bur. Twelve surfaces were paired with the original amalgam, and the remaining 12 surfaces were repaired with a different amalgam. At first, freshly triturated amalgam was condensed vertically on the floor of the specimen mold (Group A). The majority of specimens repaired with Group A failed to establish bond at the repair interface. All repair surfaces were abraded again and prepared by a second method. A metal spacer was used to create a four-wall cavity to facilitate vertical condensation directly on the repair surface (Group B). The ...</code> | <code>How do variations in surface preparation and condensation techniques affect the bonding and mechanical integrity of amalgam repairs?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `fp16`: True
- `load_best_model_at_end`: True
- `resume_from_checkpoint`: True

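These non-default values map onto `SentenceTransformerTrainingArguments` fields. A sketch of how such a run could be wired up with the sentence-transformers 3.x trainer; the output directory, dataset contents, and the added `save_strategy` (required so that `load_best_model_at_end` can compare per-epoch checkpoints) are assumptions, not the exact training script used here:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
    util,
)

model = SentenceTransformer("pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2")

# Placeholder anchor/positive rows standing in for the 8,994 training / 1,000 evaluation pairs
train_dataset = Dataset.from_dict({"anchor": ["abstract text ...", "another abstract ..."],
                                   "positive": ["research question ...", "another question ..."]})
eval_dataset = Dataset.from_dict({"anchor": ["held-out abstract ..."],
                                  "positive": ["held-out question ..."]})

loss = losses.MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=util.cos_sim)

args = SentenceTransformerTrainingArguments(
    output_dir="Bioformer-MNRL-finetuned",  # placeholder, mirrors the path in config.json
    eval_strategy="epoch",
    save_strategy="epoch",                  # assumption: needed alongside load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    fp16=True,
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()  # the card also lists resume_from_checkpoint=True for continuing an interrupted run
```
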
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: True
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.1776 | 100  | 0.0228        | -               |
| 0.3552 | 200  | 0.0096        | -               |
| 0.5329 | 300  | 0.013         | -               |
| 0.7105 | 400  | 0.0175        | -               |
| 0.8881 | 500  | 0.0154        | -               |
| 1.0    | 563  | -             | 0.0096          |
| 1.0657 | 600  | 0.0132        | -               |
| 1.2433 | 700  | 0.0056        | -               |
| 1.4210 | 800  | 0.0071        | -               |
| 1.5986 | 900  | 0.0081        | -               |
| 1.7762 | 1000 | 0.011         | -               |
| 1.9538 | 1100 | 0.0103        | -               |
| 2.0    | 1126 | -             | 0.0074          |
| 2.1314 | 1200 | 0.0149        | -               |
| 2.3091 | 1300 | 0.01          | -               |
| 2.4867 | 1400 | 0.008         | -               |
| 2.6643 | 1500 | 0.0066        | -               |
| 2.8419 | 1600 | 0.0097        | -               |
| 3.0    | 1689 | -             | 0.0058          |


### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
added_tokens.json ADDED
@@ -0,0 +1,4 @@
{
  "[TEXT]": 32768,
  "[YEAR_RANGE]": 32769
}
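
This checkpoint extends the vocabulary with two extra special tokens, `[TEXT]` and `[YEAR_RANGE]`. The card does not document a prompt format for them, but you can check that the tokenizer treats each as a single token (a small sanity check, not an official usage recipe):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pankajrajdeo/UMLS-Pubmed-ST-TCE-Epoch-2-QA_10K")

# Each added token maps to a single id at the end of the vocabulary
print(tokenizer.convert_tokens_to_ids("[TEXT]"))        # 32768
print(tokenizer.convert_tokens_to_ids("[YEAR_RANGE]"))  # 32769
print(tokenizer.tokenize("[YEAR_RANGE] 1995 2012"))     # '[YEAR_RANGE]' stays one token
```
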
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "./Bioformer-MNRL-finetuned/checkpoint-1689",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 1024,
  "model_type": "bert",
  "num_attention_heads": 6,
  "num_hidden_layers": 16,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.46.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 32770
}
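
As a sanity check, these dimensions are consistent with the ~166 MB `model.safetensors` file below: a 16-layer BERT encoder of width 384 comes out to roughly 41.4M float32 parameters. A rough back-of-the-envelope estimate (approximate; it ignores small bias and LayerNorm terms):

```python
# Rough parameter count for the config above
hidden, layers, ffn, vocab, max_pos = 384, 16, 1536, 32770, 1024

embeddings = vocab * hidden + max_pos * hidden + 2 * hidden    # word + position + token-type embeddings
per_layer = 4 * hidden * hidden + 2 * hidden * ffn             # attention (Q, K, V, O) + feed-forward
pooler = hidden * hidden

params = embeddings + layers * per_layer + pooler
print(params)                  # ~41.4M parameters
print(params * 4 / 1e6, "MB")  # float32 -> roughly 166 MB, matching model.safetensors
```
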
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.3.1",
    "transformers": "4.46.2",
    "pytorch": "2.5.1+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59bcc32d7e4d2476be2d0548606a1d719dd36165e3718bfc082646d2b8bd47ec
size 166100216
modules.json ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 1024,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,41 @@
{
  "additional_special_tokens": [
    "[TEXT]",
    "[YEAR_RANGE]"
  ],
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,84 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32768": {
      "content": "[TEXT]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32769": {
      "content": "[YEAR_RANGE]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "[TEXT]",
    "[YEAR_RANGE]"
  ],
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": false,
  "mask_token": "[MASK]",
  "max_length": 1024,
  "model_max_length": 1024,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff