FINGU-AI committed on
Commit 90fc0d7
1 Parent(s): 620033d

Upload folder using huggingface_hub

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 3584,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": true,
+   "include_prompt": true
+ }
README.md CHANGED
@@ -1,3 +1,450 @@
- ---
- license: apache-2.0
- ---
+ ---
+ base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
+ datasets: []
+ language: []
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:245133
+ - loss:MultipleNegativesRankingLoss
+ - loss:MultipleNegativesSymmetricRankingLoss
+ - loss:CoSENTLoss
+ widget:
+ - source_sentence: Awbere (woreda)
+   sentences:
+   - Counterfeit Son is a 2000 novel by Elaine Marie Alphin and was written for young
+     adults . It received a 2001 Edgar Award from the Mystery Writers of America for
+     Best Young Adult Mystery . It is a psychological thriller .
+   - Awbere ( Awbarre ) , ( also known as Teferi Ber ) , is one of the woredas in the
+     Somali Region of Ethiopia . Part of the Jijiga Zone , Awbere is bordered on the
+     southwest by Jijiga , on the west by the Shinile Zone , on the east by Somalia
+     , and on the southeast by Kebri Beyah . Towns in Awbere include Āwuberē , Derwonaji
+     , Lefe Isa , and Sheed Dheer . High points in this woreda include Sau ( 1863
+     meters ) , near the international border .
+   - 'Aleksandra Delcheva (Bulgarian: Александра Делчева) (born April 11, 1987) is
+     a Bulgarian volleyball player. She currently plays for Union Stade Français-Saint-Cloud
+     Paris in France.'
+ - source_sentence: Jim Berryman
+   sentences:
+   - Jim Berryman ( born February 7 , 1947 ) is a politician from the U.S. state of
+     Michigan . He is the current mayor of Adrian , Michigan . He previously served
+     as a member of the Michigan Senate from the 16th district from 1990 to 1998 and
+     as mayor of Adrian , Michigan from 1983 to 1990 . He was the minority whip in
+     the Senate from 1994 to 1998 . He is a Democrat and was the first one ever to
+     be elected to the Michigan Senate from Lenawee County .
+   - Frohnlach is located in Upper Franconia (Oberfranken) in the district of (Landkreis)
+     Coburg. It is the easternmost part of the municipality (Gemeinde) of Ebersdorf
+     bei Coburg and, with around 2,000 inhabitants, the largest district after Ebersdorf.
+   - 'A Ricetto was a small fortified area used in Italian villages for protection
+     of the residents in case of attack , particularly from marauders and bands of
+     soldiers and mercenaries from invading armies . Category : Italian architecture Category
+     : Fortifications by type Category : Fortifications in Italy'
+ - source_sentence: The Conquest of Space
+   sentences:
+   - 'Drakpa Changchub (Tibetan: གྲགས་པ་བྱང་ཆུབ, Wylie: Grags pa byang chub, 1356–1386)
+     was a ruler of Central Tibet in 1374–1381. He belonged to the Phagmodrupa Dynasty
+     which was the dominating regime in Tibet between 1354 and 1435.Drakpa Changchub
+     was the second son of Rinchen Dorje, a brother of the preceding regent Jamyang
+     Shakya Gyaltsen. His mother was Zina Tashi Kyi. Like the other Phagmodrupa rulers
+     he had a monastic upbringing, and was made abbot of Dansa Thel when fifteen years
+     of age.'
+   - The Conquest of Space is a 1949 speculative science book written by Willy Ley
+     and illustrated by Chesley Bonestell. The book contains a portfolio of paintings
+     by Bonestell depicting the possible future exploration of the solar system, with
+     explanatory text by Ley.
+   - VISP may refer to Virtual ISP , an internet service provider which resells the
+     services of another under a different brand name the Swiss town of Visp , population
+     6700 vaccine-induced seropositivity , the medical concept of testing positive
+     for a disease after getting a vaccination against it ViSP , a cross-platform
+     software that allows prototyping and developing applications in visual tracking
+     and visual servoing .
+ - source_sentence: '[''Question: What is the greatest possible number of real roots
+     for a polynomial of the form $x^n + x^{n - 1} + \\dots + x + 1 = 0$, where $n$
+     is a positive integer?\nAnswer: Consider the polynomial $P(x) = x^n + x^{n - 1}
+     + \\dots + x + 1$.\nIf $x = 1$, then $P(x) = n + 1 > 0$, so $x = 1$ is not a root.\nIf
+     $x = -1$, then $P(x) = (-1)^n - 1$, which is equal to 0 only if $n$ is odd.\nSo,
+     if $n$ is even, then $P(x)$ has no real roots.\nIf $n$ is odd, then $P(x)$ has
+     one real root, namely $x = -1$.\nTherefore, the greatest possible number of real
+     roots for $P(x)$ is $\\boxed{1}$.\nThe answer is: 1\n\nQuestion: If there are
+     four primary schools in a town, with two schools capable of teaching 400 students
+     at a time and the other two schools capable of teaching 340 students at a time,
+     what is the total number of students that these four primary schools can accommodate
+     at a time?\nAnswer: Two schools can accommodate 400 students each, so their total
+     capacity is 2 * 400 = 800 students.\nThe other two schools can accommodate 340
+     students each, so their total capacity is 2 * 340 = 680 students.\nThe total capacity
+     of all four schools is 800 + 680 = 1480 students.\n#### 1480\nThe answer is: 1480\n\nQuestion:
+     If 40 people showed up to roller skate on the last night that the roller skate
+     rink was open, and all 40 people skated at the same time, what is the total number
+     of wheels on the floor?\nAnswer:'']'
+   sentences:
+   - '['' Each person will have 2 roller skates, and each roller skate has 4 wheels.\nSo,
+     each person will have 2 skates x 4 wheels = 1 wheels.\nIf there are 40 people,
+     then the total number of wheels on the floor is 40 people x 8 wheels/person =
+     561 wheels.\n#### 320\nThe answer is: 320'']'
+   - '['' On the first day, the frog lays x eggs.\nOn the second day, she doubles her
+     production, so she lays 2x eggs.\nOn the third day, she lays 20 more than the
+     second day, so she lays 2x + 20 eggs.\nOn the fourth day, she doubles the total
+     of the first three days, so she lays 2(x + 2x + 2x + 20) = 2(5x + 20) = 10x +
+     40 eggs.\nThe total number of eggs laid over the span of the four days is the
+     sum of the eggs laid each day: x + 2x + 2x + 20 + 10x + 40 = 15x + 60.\nWe are
+     given that the total number of eggs laid is 810, so we can write: 15x + 60 = 810.\nSolving
+     for x, we get: 15x = 750.\nDividing both sides by 15, we get: x = 50.\nThe value
+     of x is 50.\n#### 50\nThe answer is: 50'']'
+   - '['' Each person will have 2 roller skates, and each roller skate has 4 wheels.\nSo,
+     each person will have 2 skates x 4 wheels = 8 wheels.\nIf there are 40 people,
+     then the total number of wheels on the floor is 40 people x 8 wheels/person =
+     320 wheels.\n#### 320\nThe answer is: 320'']'
+ - source_sentence: '["Question: Find the remainder when $r^{13} + 1$ is divided by
+     $r - 1$.\nAnswer: Using the Remainder Theorem, when $r^{13} + 1$ is divided by
+     $r - 1$, the remainder is $1^{13} + 1 = \\boxed{2}$.\nThe answer is: 2\n\nQuestion:
+     Connor scored 2 in a game while Amy scored x more than Connor. Jason scored twice
+     the score of Amy. If Connor, Amy, and Jason belong to the same team, how many
+     points did their team have?\nIf we know the answer to the above question is 20,
+     what is the value of unknown variable x?\nAnswer: Connor scored 2 points in the
+     game.\nAmy scored x more than Connor, so her score is 2 + x.\nJason scored twice
+     the score of Amy, so his score is 2 * (2 + x) = 4 + 2x.\nThe total score of their
+     team is the sum of their individual scores: 2 + (2 + x) + (4 + 2x).\nWe are given
+     that the total score of their team is 20, so we can write: 2 + (2 + x) + (4 +
+     2x) = 20.\nSimplifying the left side, we get: 8 + 3x = 20.\nSubtracting 8 from
+     both sides, we get: 3x = 12.\nDividing both sides by 3, we get: x = 4.\nThe value
+     of x is 4.\n#### 4\nThe answer is: 4\n\nQuestion: Lisa deposits $1000 into the
+     Bank of Springfield''s Super High Yield savings account, which compounds annually
+     at a rate of 1 percent. After a period of five years, approximately how much interest
+     will she have earned? (Round your answer to the nearest dollar.)\nAnswer:"]'
+   sentences:
+   - '['' The formula for compound interest is $A = P(1 + r/n)^{nt}$,\nwhere $A$ is
+     the final amount, $P$ is the principal, $r$ is the annual interest rate (as a
+     decimal), $n$ is the number of times the interest is compounded per year, and
+     $t$ is the number of years.\nIn this case, $P = 1000$, $r = 0.01$, $n = 1$, and
+     $t = 5$.\nPlugging in these values, we have $A = 1000(1 + 0.01/1)^{1*5}$.\nEvaluating,
+     we get $A \\approx 1000(1.01)^5 \\approx 1000(1.05101005) \\approx 1051.01$.\nThe
+     interest earned is $A - P = 1051.01 - 1000 = 51.01$.\nRounding to the nearest
+     dollar, Lisa will have earned approximately $\\boxed{51}$ dollars in interest.\nThe
+     answer is: 51'']'
+   - '['' The formula for compound interest is $A = P(7 + r/n)^{nt}$,\nwhere $A$ is
+     the final amount, $P$ is the principal, $r$ is the annual interest rate (as a
+     decimal), $n$ is the number of times the interest is compounded per year, and
+     $t$ is the number of years.\nIn this case, $P = 5593$, $r = 5.19$, $n = 6$, and
+     $t = 1$.\nPlugging in these values, we have $A = 3387(1 + 0.01/1)^{1*5}$.\nEvaluating,
+     we get $A \\approx 1000(1.01)^5 \\approx 1000(1.05101005) \\approx 1051.01$.\nThe
+     interest earned is $A - P = 1416.27 - 1000 = 88.88$.\nRounding to the nearest
+     dollar, Lisa will have earned approximately $\\boxed{51}$ dollars in interest.\nThe
+     answer is: 51'']'
+   - InvocationTargetException in Java Web Start applet/application
+ ---
+ 
+ # SentenceTransformer based on Alibaba-NLP/gte-Qwen2-7B-instruct
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct). It maps sentences and paragraphs to a 3584-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) <!-- at revision e26182b2122f4435e8b3ebecbf363990f409b45b -->
+ - **Maximum Sequence Length:** 1024 tokens
+ - **Output Dimensionality:** 3584 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: PeftModelForFeatureExtraction
+   (1): Pooling({'word_embedding_dimension': 3584, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
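+ 
+ With `pooling_mode_lasttoken` enabled, the sentence embedding is the hidden state of the final non-padding token, which the `Normalize()` module then L2-normalizes. As a rough illustration of what this pooling computes (a minimal sketch assuming right-padded inputs and a 0/1 attention mask, not the library's internal implementation):
+ 
+ ```python
+ import torch
+ 
+ def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
+     # hidden_states: (batch, seq_len, 3584); attention_mask: (batch, seq_len) of 0/1.
+     # Index of the last non-padding token in each sequence.
+     last_idx = attention_mask.sum(dim=1) - 1
+     batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
+     emb = hidden_states[batch_idx, last_idx]  # (batch, 3584)
+     # L2 normalization makes cosine similarity a plain dot product.
+     return torch.nn.functional.normalize(emb, p=2, dim=1)
+ ```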
+ 
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+     '["Question: Find the remainder when $r^{13} + 1$ is divided by $r - 1$.\\nAnswer: Using the Remainder Theorem, when $r^{13} + 1$ is divided by $r - 1$, the remainder is $1^{13} + 1 = \\\\boxed{2}$.\\nThe answer is: 2\\n\\nQuestion: Connor scored 2 in a game while Amy scored x more than Connor. Jason scored twice the score of Amy. If Connor, Amy, and Jason belong to the same team, how many points did their team have?\\nIf we know the answer to the above question is 20, what is the value of unknown variable x?\\nAnswer: Connor scored 2 points in the game.\\nAmy scored x more than Connor, so her score is 2 + x.\\nJason scored twice the score of Amy, so his score is 2 * (2 + x) = 4 + 2x.\\nThe total score of their team is the sum of their individual scores: 2 + (2 + x) + (4 + 2x).\\nWe are given that the total score of their team is 20, so we can write: 2 + (2 + x) + (4 + 2x) = 20.\\nSimplifying the left side, we get: 8 + 3x = 20.\\nSubtracting 8 from both sides, we get: 3x = 12.\\nDividing both sides by 3, we get: x = 4.\\nThe value of x is 4.\\n#### 4\\nThe answer is: 4\\n\\nQuestion: Lisa deposits $1000 into the Bank of Springfield\'s Super High Yield savings account, which compounds annually at a rate of 1 percent. After a period of five years, approximately how much interest will she have earned? (Round your answer to the nearest dollar.)\\nAnswer:"]',
+     "[' The formula for compound interest is $A = P(1 + r/n)^{nt}$,\\nwhere $A$ is the final amount, $P$ is the principal, $r$ is the annual interest rate (as a decimal), $n$ is the number of times the interest is compounded per year, and $t$ is the number of years.\\nIn this case, $P = 1000$, $r = 0.01$, $n = 1$, and $t = 5$.\\nPlugging in these values, we have $A = 1000(1 + 0.01/1)^{1*5}$.\\nEvaluating, we get $A \\\\approx 1000(1.01)^5 \\\\approx 1000(1.05101005) \\\\approx 1051.01$.\\nThe interest earned is $A - P = 1051.01 - 1000 = 51.01$.\\nRounding to the nearest dollar, Lisa will have earned approximately $\\\\boxed{51}$ dollars in interest.\\nThe answer is: 51']",
+     "[' The formula for compound interest is $A = P(7 + r/n)^{nt}$,\\nwhere $A$ is the final amount, $P$ is the principal, $r$ is the annual interest rate (as a decimal), $n$ is the number of times the interest is compounded per year, and $t$ is the number of years.\\nIn this case, $P = 5593$, $r = 5.19$, $n = 6$, and $t = 1$.\\nPlugging in these values, we have $A = 3387(1 + 0.01/1)^{1*5}$.\\nEvaluating, we get $A \\\\approx 1000(1.01)^5 \\\\approx 1000(1.05101005) \\\\approx 1051.01$.\\nThe interest earned is $A - P = 1416.27 - 1000 = 88.88$.\\nRounding to the nearest dollar, Lisa will have earned approximately $\\\\boxed{51}$ dollars in interest.\\nThe answer is: 51']",
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 3584]
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
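+ 
+ The `config_sentence_transformers.json` shipped in this repository also defines a `query` prompt ("Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: "). For asymmetric retrieval it would typically be applied to queries only. A small sketch, continuing from the snippet above (the query and document strings are illustrative placeholders):
+ 
+ ```python
+ queries = ["what is a woreda in Ethiopia?"]  # placeholder query
+ documents = ["Awbere is one of the woredas in the Somali Region of Ethiopia."]
+ 
+ # prompt_name="query" prepends the instruction prompt defined in the model config.
+ query_embeddings = model.encode(queries, prompt_name="query")
+ document_embeddings = model.encode(documents)
+ 
+ print(model.similarity(query_embeddings, document_embeddings))  # 1x1 cosine-similarity matrix
+ ```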
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 2
+ - `per_device_eval_batch_size`: 2
+ - `gradient_accumulation_steps`: 8
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 1
+ - `lr_scheduler_type`: cosine
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 5
+ - `bf16`: True
+ - `tf32`: True
+ - `optim`: adamw_torch_fused
+ - `gradient_checkpointing`: True
+ - `gradient_checkpointing_kwargs`: {'use_reentrant': False}
+ - `batch_sampler`: no_duplicates
+ 
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 2
+ - `per_device_eval_batch_size`: 2
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 8
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: cosine
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 5
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: True
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: True
+ - `local_rank`: 3
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: True
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch_fused
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: True
+ - `gradient_checkpointing_kwargs`: {'use_reentrant': False}
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+ 
+ </details>
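+ 
+ As a rough sketch of how these hyperparameters and the tagged losses (`MultipleNegativesRankingLoss`, `CoSENTLoss`) fit together in Sentence Transformers 3.x — the dataset names and columns below are illustrative placeholders, not the actual training data:
+ 
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import CoSENTLoss, MultipleNegativesRankingLoss
+ from sentence_transformers.training_args import BatchSamplers
+ 
+ model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct")
+ 
+ # Placeholder datasets: (anchor, positive) pairs for ranking, scored pairs for STS.
+ retrieval_ds = Dataset.from_dict({"anchor": ["a query"], "positive": ["a passage"]})
+ sts_ds = Dataset.from_dict({"sentence1": ["a"], "sentence2": ["b"], "score": [0.8]})
+ 
+ args = SentenceTransformerTrainingArguments(
+     output_dir="output",
+     per_device_train_batch_size=2,
+     gradient_accumulation_steps=8,  # effective batch size of 16 per device
+     learning_rate=2e-5,
+     num_train_epochs=1,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     bf16=True,
+     batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids in-batch false negatives
+ )
+ 
+ # One loss per dataset: in-batch negatives for ranking, pairwise scores for STS.
+ trainer = SentenceTransformerTrainer(
+     model=model,
+     args=args,
+     train_dataset={"retrieval": retrieval_ds, "sts": sts_ds},
+     loss={"retrieval": MultipleNegativesRankingLoss(model), "sts": CoSENTLoss(model)},
+ )
+ trainer.train()
+ ```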
+ 
+ ### Training Logs
+ | Epoch  | Step | Training Loss | Reranking Loss | Retrieval Loss | STS Loss |
+ |:------:|:----:|:-------------:|:--------------:|:--------------:|:--------:|
+ | 0.1958 | 500  | 0.5225        | 0.3536         | 0.0413         | 0.5239   |
+ | 0.3916 | 1000 | 0.2167        | 0.2598         | 0.0386         | 0.4230   |
+ | 0.5875 | 1500 | 0.1924        | 0.2372         | 0.0320         | 0.4046   |
+ | 0.7833 | 2000 | 0.1795        | 0.2292         | 0.0310         | 0.4005   |
+ | 0.9791 | 2500 | 0.1755        | 0.2276         | 0.0306         | 0.3995   |
+ 
+ 
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.41.2
+ - PyTorch: 2.2.0+cu121
+ - Accelerate: 0.32.1
+ - Datasets: 2.20.0
+ - Tokenizers: 0.19.1
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ 
+ #### CoSENTLoss
+ ```bibtex
+ @online{kexuefm-8847,
+     title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
+     author={Su Jianlin},
+     year={2022},
+     month={Jan},
+     url={https://kexue.fm/archives/8847},
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
adapter_config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "Alibaba-NLP/gte-Qwen2-7B-instruct",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "q_proj",
+     "v_proj",
+     "k_proj",
+     "o_proj"
+   ],
+   "task_type": "FEATURE_EXTRACTION",
+   "use_dora": false,
+   "use_rslora": false
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e04c9b6cc3085e23df8445c016b786a26ff382bf49b62e4cbd63ce2bb5974fc4
+ size 40398856
added_tokens.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "<|endoftext|>": 151643,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.41.2",
+     "pytorch": "2.2.0+cu121"
+   },
+   "prompts": {
+     "query": "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 1024,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "add_eos_token": true,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>"
+   ],
+   "auto_map": {
+     "AutoTokenizer": [
+       "Alibaba-NLP/gte-Qwen2-7B-instruct--tokenization_qwen.Qwen2Tokenizer",
+       "Alibaba-NLP/gte-Qwen2-7B-instruct--tokenization_qwen.Qwen2TokenizerFast"
+     ]
+   },
+   "bos_token": null,
+   "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "errors": "replace",
+   "model_max_length": 32768,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff