kr-manish committed
Commit 32d8a9b
Parent: 19c37f9

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
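
This pooling config selects CLS-token pooling: the sentence embedding is the final hidden state of the `[CLS]` token, which the pipeline's `Normalize` module (see `modules.json` below) then L2-normalizes. A minimal sketch of the equivalent computation with plain `transformers`, shown only to illustrate what the config means; for normal use, prefer the `sentence-transformers` loading path in the README:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch only: reproduce CLS pooling + L2 normalization by hand.
repo = "kr-manish/bge-base-financial-matryoshka"  # this repo
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

batch = tokenizer(["What are the parking arrangements at Priya Softweb?"],
                  padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
with torch.no_grad():
    last_hidden = model(**batch).last_hidden_state  # (batch, seq_len, 768)
cls = last_hidden[:, 0]                             # "pooling_mode_cls_token": true
embedding = torch.nn.functional.normalize(cls, p=2, dim=1)
```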
README.md ADDED
---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:160
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Priya Softweb emphasizes the importance of maintaining a clean and organized workspace. The company's HR policies clearly state that employees are responsible for keeping their assigned workspaces clean, orderly, and free from unnecessary items. Spitting tobacco, gum, or other substances in the washrooms is strictly prohibited. The company believes that a clean and organized work environment contributes to a more efficient and professional work experience for everyone. This emphasis on cleanliness reflects the company's commitment to creating a pleasant and hygienic workspace for its employees.
  sentences:
  - What is Priya Softweb's policy on the use of mobile phones during work hours?
  - What steps does Priya Softweb take to ensure that the workspace is clean and organized?
  - What are the repercussions for employees who violate the Non-Disclosure Agreement at Priya Softweb?
- source_sentence: Priya Softweb provides allocated basement parking facilities for employees to park their two-wheelers and four-wheelers. However, parking on the ground floor, around the lawn or main premises, is strictly prohibited as this space is reserved for Directors. Employees should use the parking under wings 5 and 6, while other parking spaces are allocated to different wings. Parking two-wheelers in the car parking zone is not permitted, even if space is available. Two-wheelers should be parked in the designated basement space on the main stand, not on the side stand. Employees are encouraged to park in common spaces on a first-come, first-served basis. The company clarifies that it is not responsible for providing parking and that employees park their vehicles at their own risk. This comprehensive parking policy ensures organized parking arrangements and clarifies the company's liability regarding vehicle safety.
  sentences:
  - What is the application process for planned leaves at Priya Softweb?
  - What are the parking arrangements at Priya Softweb?
  - What is the process for reporting a security breach at Priya Softweb?
- source_sentence: The Diwali bonus at Priya Softweb is a discretionary benefit linked to the company's business performance. Distributed during the festive season of Diwali, it serves as a gesture of appreciation for employees' contributions throughout the year. However, it's important to note that employees currently under the notice period are not eligible for this bonus. This distinction highlights that the bonus is intended to reward ongoing commitment and contribution to the company's success.
  sentences:
  - What steps does Priya Softweb take to promote responsible use of company resources?
  - How does Priya Softweb demonstrate its commitment to Diversity, Equity, and Inclusion (DEI)?
  - What is the significance of the company's Diwali bonus at Priya Softweb?
- source_sentence: Priya Softweb's HR Manual paints a picture of a company that values its employees while upholding a strong sense of professionalism and ethical conduct. The company emphasizes a structured and transparent approach to its HR processes, ensuring clarity and fairness in areas like recruitment, performance appraisals, compensation, leave management, work-from-home arrangements, and incident reporting. The manual highlights the importance of compliance with company policies, promotes diversity and inclusion, and encourages a culture of continuous learning and development. Overall, the message conveyed is one of creating a supportive, respectful, and growth-oriented work environment for all employees.
  sentences:
  - What is the overall message conveyed by Priya Softweb's HR Manual?
  - What is the process for reporting employee misconduct at Priya Softweb?
  - What is Priya Softweb's policy on salary disbursement and payslips?
- source_sentence: No, work-from-home arrangements do not affect an employee's employment terms, compensation, and benefits at Priya Softweb. This clarifies that work-from-home is a flexible work arrangement and does not impact the employee's overall employment status or benefits.
  sentences:
  - Do work-from-home arrangements affect compensation and benefits at Priya Softweb?
  - What is the objective of the Work From Home Policy at Priya Softweb?
  - What is the procedure for a new employee joining Priya Softweb?
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.6111111111111112
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7777777777777778
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7777777777777778
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8333333333333334
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6111111111111112
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.25925925925925924
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15555555555555559
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08333333333333334
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6111111111111112
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7777777777777778
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7777777777777778
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8333333333333334
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7192441461309548
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6828703703703703
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6895641882483987
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.5555555555555556
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.7777777777777778
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7777777777777778
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8333333333333334
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.5555555555555556
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.25925925925925924
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15555555555555559
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.08333333333333334
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.5555555555555556
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.7777777777777778
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7777777777777778
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8333333333333334
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6972735740811556
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6537037037037037
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.6594551282051282
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.4444444444444444
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.6666666666666666
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7777777777777778
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.8888888888888888
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.4444444444444444
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2222222222222222
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.15555555555555559
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0888888888888889
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.4444444444444444
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.6666666666666666
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7777777777777778
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.8888888888888888
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6562432565194594
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5836419753086418
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5862843837990037
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.4444444444444444
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.6666666666666666
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.7222222222222222
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.7777777777777778
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.4444444444444444
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2222222222222222
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1444444444444445
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.07777777777777779
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.4444444444444444
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.6666666666666666
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.7222222222222222
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.7777777777777778
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.6173875222934583
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.5653439153439153
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5728811234914597
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.3888888888888889
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.6111111111111112
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.6666666666666666
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.7777777777777778
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.3888888888888889
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2037037037037037
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.13333333333333336
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.07777777777777779
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.3888888888888889
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.6111111111111112
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.6666666666666666
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.7777777777777778
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.5654500657830313
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.49922839506172845
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.5078970140244651
      name: Cosine Map@100
---

# SentenceTransformer based on BAAI/bge-base-en-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
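
For readers who want to see how these three modules map onto code, here is a sketch of assembling the same pipeline from `sentence-transformers` building blocks; loading the repo id directly (as in the Usage section below) is equivalent and simpler:

```python
from sentence_transformers import SentenceTransformer, models

# Sketch: rebuild the Transformer -> Pooling(cls) -> Normalize stack by hand.
word_model = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)
pooling = models.Pooling(
    word_model.get_word_embedding_dimension(),  # 768
    pooling_mode="cls",
)
model = SentenceTransformer(modules=[word_model, pooling, models.Normalize()])
```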

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("kr-manish/bge-base-financial-matryoshka")
# Run inference
sentences = [
    "No, work-from-home arrangements do not affect an employee's employment terms, compensation, and benefits at Priya Softweb. This clarifies that work-from-home is a flexible work arrangement and does not impact the employee's overall employment status or benefits.",
    'Do work-from-home arrangements affect compensation and benefits at Priya Softweb?',
    'What is the objective of the Work From Home Policy at Priya Softweb?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
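
Because the model was trained with `MatryoshkaLoss` over dimensions 768/512/256/128/64, its embeddings can also be truncated to the smaller sizes with the modest quality drop shown in the evaluation tables below. A sketch, assuming the `truncate_dim` option available in recent `sentence-transformers` releases:

```python
from sentence_transformers import SentenceTransformer

# Load with truncated (Matryoshka) embeddings, e.g. 256 dimensions.
model = SentenceTransformer("kr-manish/bge-base-financial-matryoshka",
                            truncate_dim=256)
embeddings = model.encode(["What are the parking arrangements at Priya Softweb?"])
print(embeddings.shape)
# (1, 256)
```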

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6111     |
| cosine_accuracy@3   | 0.7778     |
| cosine_accuracy@5   | 0.7778     |
| cosine_accuracy@10  | 0.8333     |
| cosine_precision@1  | 0.6111     |
| cosine_precision@3  | 0.2593     |
| cosine_precision@5  | 0.1556     |
| cosine_precision@10 | 0.0833     |
| cosine_recall@1     | 0.6111     |
| cosine_recall@3     | 0.7778     |
| cosine_recall@5     | 0.7778     |
| cosine_recall@10    | 0.8333     |
| cosine_ndcg@10      | 0.7192     |
| cosine_mrr@10       | 0.6829     |
| **cosine_map@100**  | **0.6896** |

#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.5556     |
| cosine_accuracy@3   | 0.7778     |
| cosine_accuracy@5   | 0.7778     |
| cosine_accuracy@10  | 0.8333     |
| cosine_precision@1  | 0.5556     |
| cosine_precision@3  | 0.2593     |
| cosine_precision@5  | 0.1556     |
| cosine_precision@10 | 0.0833     |
| cosine_recall@1     | 0.5556     |
| cosine_recall@3     | 0.7778     |
| cosine_recall@5     | 0.7778     |
| cosine_recall@10    | 0.8333     |
| cosine_ndcg@10      | 0.6973     |
| cosine_mrr@10       | 0.6537     |
| **cosine_map@100**  | **0.6595** |

#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.4444     |
| cosine_accuracy@3   | 0.6667     |
| cosine_accuracy@5   | 0.7778     |
| cosine_accuracy@10  | 0.8889     |
| cosine_precision@1  | 0.4444     |
| cosine_precision@3  | 0.2222     |
| cosine_precision@5  | 0.1556     |
| cosine_precision@10 | 0.0889     |
| cosine_recall@1     | 0.4444     |
| cosine_recall@3     | 0.6667     |
| cosine_recall@5     | 0.7778     |
| cosine_recall@10    | 0.8889     |
| cosine_ndcg@10      | 0.6562     |
| cosine_mrr@10       | 0.5836     |
| **cosine_map@100**  | **0.5863** |

#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.4444     |
| cosine_accuracy@3   | 0.6667     |
| cosine_accuracy@5   | 0.7222     |
| cosine_accuracy@10  | 0.7778     |
| cosine_precision@1  | 0.4444     |
| cosine_precision@3  | 0.2222     |
| cosine_precision@5  | 0.1444     |
| cosine_precision@10 | 0.0778     |
| cosine_recall@1     | 0.4444     |
| cosine_recall@3     | 0.6667     |
| cosine_recall@5     | 0.7222     |
| cosine_recall@10    | 0.7778     |
| cosine_ndcg@10      | 0.6174     |
| cosine_mrr@10       | 0.5653     |
| **cosine_map@100**  | **0.5729** |

#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.3889     |
| cosine_accuracy@3   | 0.6111     |
| cosine_accuracy@5   | 0.6667     |
| cosine_accuracy@10  | 0.7778     |
| cosine_precision@1  | 0.3889     |
| cosine_precision@3  | 0.2037     |
| cosine_precision@5  | 0.1333     |
| cosine_precision@10 | 0.0778     |
| cosine_recall@1     | 0.3889     |
| cosine_recall@3     | 0.6111     |
| cosine_recall@5     | 0.6667     |
| cosine_recall@10    | 0.7778     |
| cosine_ndcg@10      | 0.5655     |
| cosine_mrr@10       | 0.4992     |
| **cosine_map@100**  | **0.5079** |
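
The numbers above come from `InformationRetrievalEvaluator` runs at each Matryoshka dimension. The evaluation split itself is not shipped with the repo, so the sketch below uses a hypothetical toy corpus purely to show the evaluator's expected inputs:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy example data (hypothetical; the card's numbers come from a held-out split).
queries = {"q1": "What are the parking arrangements at Priya Softweb?"}
corpus = {"d1": "Priya Softweb provides allocated basement parking ..."}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

model = SentenceTransformer("kr-manish/bge-base-financial-matryoshka")
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs,
                                          name="dim_768")
metrics = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```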

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 160 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
  |         | positive                                                                             | anchor                                                                              |
  |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
  | type    | string                                                                               | string                                                                              |
  | details | <ul><li>min: 18 tokens</li><li>mean: 93.95 tokens</li><li>max: 381 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 20.32 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
  | positive | anchor |
  |:---------|:-------|
  | <code>Priya Softweb's HR Manual provides valuable insights into the company's culture and values. Key takeaways include: * **Structure and Transparency:** The company emphasizes a structured and transparent approach to its HR processes. This is evident in its clear policies for recruitment, performance appraisals, compensation, leave management, work-from-home arrangements, and incident reporting. * **Professionalism and Ethics:** Priya Softweb places a high value on professionalism and ethical conduct. Its dress code, guidelines for mobile phone usage, and strict policies against tobacco use within the office all point toward a commitment to maintaining a professional and respectful work environment. * **Employee Well-being:** The company demonstrates a genuine concern for the well-being of its employees. This is reflected in its comprehensive leave policies, flexible work-from-home arrangements, and efforts to promote a healthy and clean workspace. * **Diversity and Inclusion:** Priya Softweb is committed to fostering a diverse and inclusive workplace, where employees from all backgrounds feel valued and respected. Its DEI policy outlines the company's commitment to equal opportunities, diverse hiring practices, and inclusive benefits and policies. * **Continuous Learning and Development:** The company encourages a culture of continuous learning and development, providing opportunities for employees to expand their skillsets and stay current with industry advancements. This is evident in its policies for Ethics & Compliance training and its encouragement of utilizing idle time for self-learning and exploring new technologies. Overall, Priya Softweb's HR Manual reveals a company culture that prioritizes structure, transparency, professionalism, employee well-being, diversity, and a commitment to continuous improvement. The company strives to create a supportive and growth-oriented work environment where employees feel valued and empowered to succeed.</code> | <code>What are the key takeaways from Priya Softweb's HR Manual regarding the company's culture and values?</code> |
  | <code>Priya Softweb provides allocated basement parking facilities for employees to park their two-wheelers and four-wheelers. However, parking on the ground floor, around the lawn or main premises, is strictly prohibited as this space is reserved for Directors. Employees should use the parking under wings 5 and 6, while other parking spaces are allocated to different wings. Parking two-wheelers in the car parking zone is not permitted, even if space is available. Two-wheelers should be parked in the designated basement space on the main stand, not on the side stand. Employees are encouraged to park in common spaces on a first-come, first-served basis. The company clarifies that it is not responsible for providing parking and that employees park their vehicles at their own risk. This comprehensive parking policy ensures organized parking arrangements and clarifies the company's liability regarding vehicle safety.</code> | <code>What are the parking arrangements at Priya Softweb?</code> |
  | <code>Investments and declarations must be submitted on or before the 25th of each month through OMS at Priya Softweb.</code> | <code>What is the deadline for submitting investments and declarations at Priya Softweb?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
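
In code, a loss with these parameters would typically be built by wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`; a minimal sketch under those assumptions:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss,
                      matryoshka_dims=[768, 512, 256, 128, 64])
```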

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
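
Expressed as code, the non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows (a sketch; `output_dir` and anything not listed are assumptions):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # assumed
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicates within a batch
)
```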

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch   | Step  | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:-------:|:-----:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| **1.0** | **1** | **0.5729**             | **0.5863**             | **0.6595**             | **0.5079**            | **0.6896**             |
| 2.0     | 2     | 0.5729                 | 0.5863                 | 0.6595                 | 0.5079                | 0.6896                 |
| 3.0     | 3     | 0.5729                 | 0.5863                 | 0.6595                 | 0.5079                | 0.6896                 |
| 3.2     | 4     | 0.5729                 | 0.5863                 | 0.6595                 | 0.5079                | 0.6896                 |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
{
  "_name_or_path": "BAAI/bge-base-en-v1.5",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.41.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
{
  "__version__": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.41.2",
    "pytorch": "2.1.2+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0d4db737f56aaea90796b5a8d219de0eee958295a575c611f6b417ad340151da
size 437951328
modules.json ADDED
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
{
  "max_seq_length": 512,
  "do_lower_case": true
}
special_tokens_map.json ADDED
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render.
tokenizer_config.json ADDED
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render.